Age | Commit message | Author |
|
|
|
|
|
API change. Sorry, but this is the time to break things,
before 1.0 is released. This matches the recent changes to
CommonMark.dtd.
|
|
Ultimately I think we can get rid of parser->curline and
avoid an unnecessary allocation per line.
|
|
CMARK_NODE_HRULE -> CMARK_NODE_THEMATIC_BREAK.
However, we've defined the former as the latter to keep
backwards compatibility.
See jgm/CommonMark 8fa94cb460f5e516b0e57adca33f50a669d51f6c
|
|
Defined CMARK_NODE_HEADER as CMARK_NODE_HEADING to ease
the transition.
|
|
See jgm/CommonMark commit 0cdbcee4e840abd0ac7db93797b2b75ca4104314
Note that we have defined
cmark_node_get_header_level = cmark_node_get_heading_level
and
cmark_node_set_header_level = cmark_node_set_heading_level
for backwards compatibility in the API.
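A minimal sketch of how such backward-compatibility aliases
might be spelled in the public header (illustrative only; the
exact form in cmark.h may differ):
```
/* Sketch of compatibility aliases for the renames described
 * above (illustrative; check cmark.h for the real spelling). */
#define CMARK_NODE_HRULE  CMARK_NODE_THEMATIC_BREAK
#define CMARK_NODE_HEADER CMARK_NODE_HEADING
#define cmark_node_get_header_level cmark_node_get_heading_level
#define cmark_node_set_header_level cmark_node_set_heading_level
```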
|
|
|
|
|
|
This previously caused cmark to break out of a list,
thinking it had seen two consecutive blank lines.
|
|
|
|
So `S_process_line` sees only Unix-style line endings.
Closes #72, avoiding mixed line endings.
Ultimately we probably want a better solution that preserves
the line ending style of the input file; for now, this solution
forces the output to use newlines.
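A hedged sketch of the kind of normalization this implies,
using a hypothetical helper (not the actual cmark feed code);
handling of a CRLF pair split across two feed chunks is omitted:
```
#include <stddef.h>

/* Hypothetical helper: copy len bytes from src to dst, converting
 * CRLF and bare CR line endings to LF, so that later line splitting
 * only ever sees '\n'.  Returns the number of bytes written
 * (never more than len). */
static size_t normalize_line_endings(char *dst, const char *src, size_t len) {
  size_t out = 0;
  for (size_t i = 0; i < len; i++) {
    if (src[i] == '\r') {
      dst[out++] = '\n';
      if (i + 1 < len && src[i + 1] == '\n')
        i++; /* CRLF: the LF is already accounted for */
    } else {
      dst[out++] = src[i];
    }
  }
  return out;
}
```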
|
|
Closes #71.
Added a test to api_test.
|
|
|
|
See jgm/CommonMark#332
|
|
* Reformatted all source files.
* Added 'format' target to Makefile.
* Removed 'astyle' target.
* Updated .editorconfig.
|
|
|
|
|
|
This should be added to the spec.
|
|
(It uses GNU extensions, and we don't need it anyway.)
|
|
* Rewrote spec for HTML blocks. A few other spec examples
also changed as a result.
* Removed old `html_block_tag` scanner. Added new
`html_block_start` and `html_block_start_7`, as well
as `html_block_end_n` for n = 1-5.
* Rewrote block parser for new HTML block spec.
|
|
|
|
This caused certain NULLs not to be replaced.
Found by `make fuzztest`.
|
|
Also added a command line option, `--validate-utf8`.
This option causes cmark to check for valid UTF-8,
replacing invalid sequences with the replacement
character, U+FFFD.
Reinstated API tests for UTF-8.
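A simplified sketch of this kind of validation (names are
hypothetical; the real utf8.c also rejects overlong encodings,
UTF-16 surrogates, and out-of-range code points):
```
#include <stdio.h>

/* Return the length of the UTF-8 sequence starting at s, or 0 if the
 * lead byte or a continuation byte is invalid (simplified check). */
static int utf8_seq_len(const unsigned char *s, int remaining) {
  int len, i;
  if (s[0] < 0x80)                len = 1;
  else if ((s[0] & 0xE0) == 0xC0) len = 2;
  else if ((s[0] & 0xF0) == 0xE0) len = 3;
  else if ((s[0] & 0xF8) == 0xF0) len = 4;
  else return 0;
  if (len > remaining) return 0;
  for (i = 1; i < len; i++)
    if ((s[i] & 0xC0) != 0x80) return 0;
  return len;
}

/* Write len bytes of src to out, replacing each invalid sequence (and
 * each NUL byte) with U+FFFD, advancing one byte at a time past
 * invalid input. */
static void write_valid_utf8(FILE *out, const unsigned char *src, int len) {
  int i = 0;
  while (i < len) {
    int n = utf8_seq_len(src + i, len - i);
    if (n == 0 || src[i] == 0) {
      fputs("\xEF\xBF\xBD", out); /* U+FFFD REPLACEMENT CHARACTER */
      i++;
    } else {
      fwrite(src + i, 1, (size_t)n, out);
      i += n;
    }
  }
}
```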
|
|
|
|
We now replace null characters in the line splitting code.
|
|
Now it just replaces bad UTF-8 sequences and NULLs.
This restores benchmarks to near their previous levels.
|
|
We no longer preprocess tabs to spaces before parsing.
Instead, we keep track of both the byte offset and
the (virtual) column as we parse block starts.
This allows us to handle tabs without converting
to spaces first. Tabs are left as tabs in the output.
Added `column` and `first_nonspace_column` fields to `parser`.
Added a utility function to advance the offset, computing
the virtual column too.
Note that we don't need to deal with UTF-8 here at all.
Only ASCII occurs in block starts.
Significant performance improvement, because we're no longer
doing UTF-8 validation -- though we might want to add that
back in.
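A minimal sketch of the offset/column bookkeeping (the struct
and helper names are assumptions, not the exact cmark
internals); CommonMark uses a 4-column tab stop:
```
#define TAB_STOP 4

/* Hypothetical position tracker: offset is a byte index into the
 * line, column is the virtual column, where a tab advances to the
 * next multiple of TAB_STOP. */
struct pos {
  int offset;
  int column;
};

static void advance_offset(struct pos *p, const char *line) {
  if (line[p->offset] == '\t')
    p->column += TAB_STOP - (p->column % TAB_STOP);
  else
    p->column += 1;
  p->offset += 1;
}
```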
|
|
|
|
This isn't actually needed.
|
|
Guard against overly large chunks being passed via the API.
|
|
There are probably a couple of places I missed. But this will only
be a problem if we use a 64-bit bufsize_t at some point. Then, we'll
get warnings from -Wshorten-64-to-32.
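One plausible form of such a guard, assuming a 32-bit
`bufsize_t` (the helper name is made up for illustration):
```
#include <limits.h>
#include <stdlib.h>

typedef int bufsize_t; /* assumed 32-bit buffer size type */

/* Narrow a size_t length from the public API to bufsize_t,
 * aborting if the chunk is too large to represent. */
static bufsize_t checked_size(size_t len) {
  if (len > (size_t)INT_MAX)
    abort(); /* or report an error to the caller */
  return (bufsize_t)len;
}
```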
|
|
|
|
Added fields `offset`, `first_nonspace`, `indent`, and `blank`
to the `cmark_parser` struct.
This just removes some repetition in the code.
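A sketch of how these fields might look in the struct (the
types and comments here are assumptions):
```
#include <stdbool.h>

/* Illustrative subset of the per-line parser state described above. */
struct parser_line_state {
  int offset;          /* current parse position (byte offset) in the line */
  int first_nonspace;  /* offset of the first non-space character */
  int indent;          /* indentation: first_nonspace - offset */
  bool blank;          /* true if the rest of the line is blank */
};
```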
|
|
|
|
This fixes cases like:
```
1. a
2. b
3. c
```
|
|
|
|
From btrask's alternate code in the comment on
https://github.com/jgm/cmark/pull/18.
Note: this gives a 1-2% performance boost in our benchmark,
probably enough to make it worthwhile.
|
|
Conflicts:
src/blocks.c
|
|
Closes #52.
|
|
|
|
|
|
|
|
|
|
|
|
By the time we check for a list start, we've already checked
for an HRULE, so we don't need to repeat that check here.
Thanks to Robin Stocker for pointing out a similar redundancy
in commonmark.js.
|
|
|
|
Closes #9, confirmed with ASAN.
Avoid using `parser->current` in the loop that creates new
blocks, since `finalize` in `add_child` may have removed
the current block (if it contains only reference definitions).
This isn't a great solution; in the long run we need to rewrite
to make the logic clearer and to make it harder to make
mistakes like this one.
|
|
This arose when a paragraph containing only reference link definitions and
blank space was finalized. Finalization would remove the
node. `finalize` returns the parent node, but the problem
arose because we had both `cur` and `parser->current`, and
only one was being updated. Solution: remove `cur`, which is
a holdover from before we had `parser->current`.
I believe this will close #9 -- @JordanMilne can you test and confirm?
|
|
For consistency with the API.
|