Struct regex_automata::nfa::thompson::NFA
pub struct NFA(Arc<Inner>);
A byte oriented Thompson non-deterministic finite automaton (NFA).
A Thompson NFA is a finite state machine that permits unconditional epsilon transitions, but guarantees that there exists at most one non-epsilon transition for each element in the alphabet for each state.
An NFA may be used directly for searching, for analysis or to build a deterministic finite automaton (DFA).
§Cheap clones
Since an NFA is a core data type in this crate that many other regex engines are based on top of, it is convenient to give ownership of an NFA to said regex engines. Because of this, an NFA uses reference counting internally. Therefore, it is cheap to clone and it is encouraged to do so.
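For example, here is a minimal sketch (not from the original docs; the BoundedBacktracker engine used here assumes the nfa-backtrack feature is enabled) of building an NFA once and handing cheap clones of it to multiple regex engines:
use regex_automata::{
nfa::thompson::{backtrack::BoundedBacktracker, pikevm::PikeVM, NFA},
Match,
};
// Build the NFA once...
let nfa = NFA::new(r"foo[0-9]+")?;
// ...and hand cheap (reference counted) clones of it to multiple engines.
let vm = PikeVM::new_from_nfa(nfa.clone())?;
let _bt = BoundedBacktracker::new_from_nfa(nfa.clone())?;
let (mut cache, mut caps) = (vm.create_cache(), vm.create_captures());
vm.captures(&mut cache, b"foo12345", &mut caps);
assert_eq!(Some(Match::must(0, 0..8)), caps.get_match());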
§Capabilities
Using an NFA for searching via the PikeVM provides the most “power” of any regex engine in this crate. Namely, it supports the following in all cases:
- Detection of a match.
- Location of a match, including both the start and end offset, in a single pass of the haystack.
- Location of matching capturing groups.
- Handles multiple patterns, including (1)-(3) when multiple patterns are present.
§Capturing Groups
Groups refer to parenthesized expressions inside a regex pattern. They look like this, where exp is an arbitrary regex:
- (exp) - An unnamed capturing group.
- (?P<name>exp) or (?<name>exp) - A named capturing group.
- (?:exp) - A non-capturing group.
- (?i:exp) - A non-capturing group that sets flags.
Only the first two forms are said to be capturing. Capturing means that the last position at which they match is reportable. The Captures type provides convenient access to the match positions of capturing groups, which includes looking up capturing groups by their name.
§Byte oriented
This NFA is byte oriented, which means that all of its transitions are defined on bytes. In other words, the alphabet of an NFA consists of the 256 different byte values.
While DFAs nearly demand that they be byte oriented for performance reasons, an NFA could conceivably be Unicode codepoint oriented. Indeed, a previous version of this NFA supported both byte and codepoint oriented modes. A codepoint oriented mode can work because an NFA fundamentally uses a sparse representation of transitions, which works well with the large sparse space of Unicode codepoints.
Nevertheless, this NFA is only byte oriented. This choice is primarily driven by implementation simplicity, and also in part memory usage. In practice, performance between the two is roughly comparable. However, building a DFA (including a hybrid DFA) really wants a byte oriented NFA. So if we do have a codepoint oriented NFA, then we also need to generate a byte oriented NFA in order to build a hybrid NFA/DFA. Thus, by only generating byte oriented NFAs, we can produce one less NFA. In other words, if we made our NFA codepoint oriented, we’d need to also make it support a byte oriented mode, which is more complicated. But a byte oriented mode can support everything.
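As a quick illustration (a sketch, not from the original docs, assuming the default compiler settings), even a Unicode literal compiles down to transitions over individual UTF-8 bytes:
use regex_automata::nfa::thompson::{NFA, State};
// The snowman codepoint is three UTF-8 bytes: 0xE2 0x98 0x83.
let nfa = NFA::new("☃")?;
// Every non-epsilon transition in the NFA is defined over bytes, so we
// expect to find a byte range transition for 0xE2 somewhere.
let has_e2 = nfa.states().iter().any(|s| match *s {
State::ByteRange { trans } => trans.start == 0xE2 && trans.end == 0xE2,
_ => false,
});
assert!(has_e2);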
§Differences with DFAs
At the theoretical level, the precise difference between an NFA and a DFA is that, in a DFA, for every state, an input symbol unambiguously refers to a single transition and that an input symbol is required for each transition. At a practical level, this permits DFA implementations to be implemented at their core with a small constant number of CPU instructions for each byte of input searched. In practice, this makes them quite a bit faster than NFAs in general. Namely, in order to execute a search for any Thompson NFA, one needs to keep track of a set of states, and execute the possible transitions on all of those states for each input symbol. Overall, this results in much more overhead. To a first approximation, one can expect DFA searches to be about an order of magnitude faster.
So why use an NFA at all? The main advantage of an NFA is that it takes linear time (in the size of the pattern string after repetitions have been expanded) to build and linear memory usage. A DFA, on the other hand, may take exponential time and/or space to build. Even in non-pathological cases, DFAs often take quite a bit more memory than their NFA counterparts, especially if large Unicode character classes are involved. Of course, an NFA also provides additional capabilities. For example, it can match Unicode word boundaries on non-ASCII text and resolve the positions of capturing groups.
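For a rough sense of the memory difference (a sketch, not from the original docs; building the dense DFA below assumes the dfa-build feature is enabled), one can compare heap usage for a large Unicode class:
use regex_automata::{dfa::dense, nfa::thompson::NFA};
let nfa = NFA::new(r"\w")?;
let dfa = dense::DFA::new(r"\w")?;
// Per the discussion above, the fully compiled DFA typically needs
// considerably more heap than the NFA it was built from, especially
// for big Unicode character classes.
assert!(nfa.memory_usage() < dfa.memory_usage());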
Note that a hybrid::regex::Regex strikes a good balance between an NFA and a DFA. It avoids the exponential build time of a DFA while maintaining its fast search time. The downside of a hybrid NFA/DFA is that in some cases it can be slower at search time than the NFA. (It also has less functionality than a pure NFA. It cannot handle Unicode word boundaries on non-ASCII text and cannot resolve capturing groups.)
§Example
This shows how to build an NFA with the default configuration and execute a search using the Pike VM.
use regex_automata::{nfa::thompson::pikevm::PikeVM, Match};
let re = PikeVM::new(r"foo[0-9]+")?;
let mut cache = re.create_cache();
let mut caps = re.create_captures();
let expected = Some(Match::must(0, 0..8));
re.captures(&mut cache, b"foo12345", &mut caps);
assert_eq!(expected, caps.get_match());
§Example: resolving capturing groups
This example shows how to parse some simple dates and extract the components of each date via capturing groups.
use regex_automata::{
nfa::thompson::pikevm::PikeVM,
util::captures::Captures,
};
let vm = PikeVM::new(r"(?P<y>\d{4})-(?P<m>\d{2})-(?P<d>\d{2})")?;
let mut cache = vm.create_cache();
let haystack = "2012-03-14, 2013-01-01 and 2014-07-05";
let all: Vec<Captures> = vm.captures_iter(
&mut cache, haystack.as_bytes()
).collect();
// There should be a total of 3 matches.
assert_eq!(3, all.len());
// The year from the second match is '2013'.
let span = all[1].get_group_by_name("y").unwrap();
assert_eq!("2013", &haystack[span]);
This example shows that only the last match of a capturing group is reported, even if it had to match multiple times for an overall match to occur.
use regex_automata::{nfa::thompson::pikevm::PikeVM, Span};
let re = PikeVM::new(r"([a-z]){4}")?;
let mut cache = re.create_cache();
let mut caps = re.create_captures();
let haystack = b"quux";
re.captures(&mut cache, haystack, &mut caps);
assert!(caps.is_match());
assert_eq!(Some(Span::from(3..4)), caps.get_group(1));
Tuple Fields§
0: Arc<Inner>
Implementations§
impl NFA
pub fn new(pattern: &str) -> Result<NFA, BuildError>
Parse the given regular expression using a default configuration and build an NFA from it.
If you want a non-default configuration, then use the NFA Compiler with a Config.
§Example
use regex_automata::{nfa::thompson::pikevm::PikeVM, Match};
let re = PikeVM::new(r"foo[0-9]+")?;
let (mut cache, mut caps) = (re.create_cache(), re.create_captures());
let expected = Some(Match::must(0, 0..8));
re.captures(&mut cache, b"foo12345", &mut caps);
assert_eq!(expected, caps.get_match());
pub fn new_many<P: AsRef<str>>(patterns: &[P]) -> Result<NFA, BuildError>
Parse the given regular expressions using a default configuration and build a multi-NFA from them.
If you want a non-default configuration, then use the NFA Compiler with a Config.
§Example
use regex_automata::{nfa::thompson::pikevm::PikeVM, Match};
let re = PikeVM::new_many(&["[0-9]+", "[a-z]+"])?;
let (mut cache, mut caps) = (re.create_cache(), re.create_captures());
let expected = Some(Match::must(1, 0..3));
re.captures(&mut cache, b"foo12345bar", &mut caps);
assert_eq!(expected, caps.get_match());
pub fn always_match() -> NFA
Returns an NFA with a single regex pattern that always matches at every position.
§Example
use regex_automata::{nfa::thompson::{NFA, pikevm::PikeVM}, Match};
let re = PikeVM::new_from_nfa(NFA::always_match())?;
let (mut cache, mut caps) = (re.create_cache(), re.create_captures());
let expected = Some(Match::must(0, 0..0));
re.captures(&mut cache, b"", &mut caps);
assert_eq!(expected, caps.get_match());
re.captures(&mut cache, b"foo", &mut caps);
assert_eq!(expected, caps.get_match());
pub fn never_match() -> NFA
Returns an NFA that never matches at any position.
This is a convenience routine for creating an NFA with zero patterns.
§Example
use regex_automata::nfa::thompson::{NFA, pikevm::PikeVM};
let re = PikeVM::new_from_nfa(NFA::never_match())?;
let (mut cache, mut caps) = (re.create_cache(), re.create_captures());
re.captures(&mut cache, b"", &mut caps);
assert!(!caps.is_match());
re.captures(&mut cache, b"foo", &mut caps);
assert!(!caps.is_match());
pub fn config() -> Config
Return a default configuration for an NFA.
This is a convenience routine to avoid needing to import the Config type when customizing the construction of an NFA.
§Example
This example shows how to build an NFA with a small size limit that results in a compilation error for any regex that tries to use more heap memory than the configured limit.
use regex_automata::nfa::thompson::{NFA, pikevm::PikeVM};
let result = PikeVM::builder()
.thompson(NFA::config().nfa_size_limit(Some(1_000)))
// Remember, \w is Unicode-aware by default and thus huge.
.build(r"\w+");
assert!(result.is_err());
pub fn compiler() -> Compiler
Return a compiler for configuring the construction of an NFA.
This is a convenience routine to avoid needing to import the Compiler type in common cases.
§Example
This example shows how to build an NFA that is permitted to match invalid UTF-8. Without the additional syntax configuration here, compilation of (?-u:.) would fail because it is permitted to match invalid UTF-8.
use regex_automata::{
nfa::thompson::pikevm::PikeVM,
util::syntax,
Match,
};
let re = PikeVM::builder()
.syntax(syntax::Config::new().utf8(false))
.build(r"[a-z]+(?-u:.)")?;
let (mut cache, mut caps) = (re.create_cache(), re.create_captures());
let expected = Some(Match::must(0, 1..5));
re.captures(&mut cache, b"\xFFabc\xFF", &mut caps);
assert_eq!(expected, caps.get_match());
pub fn patterns(&self) -> PatternIter<'_>
Returns an iterator over all pattern identifiers in this NFA.
Pattern IDs are allocated in sequential order starting from zero, where the order corresponds to the order of patterns provided to the NFA::new_many constructor.
§Example
use regex_automata::{nfa::thompson::NFA, PatternID};
let nfa = NFA::new_many(&["[0-9]+", "[a-z]+", "[A-Z]+"])?;
let pids: Vec<PatternID> = nfa.patterns().collect();
assert_eq!(pids, vec![
PatternID::must(0),
PatternID::must(1),
PatternID::must(2),
]);
pub fn pattern_len(&self) -> usize
Returns the total number of regex patterns in this NFA.
This may return zero if the NFA was constructed with no patterns. In this case, the NFA can never produce a match for any input.
This is guaranteed to be no bigger than PatternID::LIMIT because NFA construction will fail if too many patterns are added.
It is always true that nfa.patterns().count() == nfa.pattern_len().
§Example
use regex_automata::nfa::thompson::NFA;
let nfa = NFA::new_many(&["[0-9]+", "[a-z]+", "[A-Z]+"])?;
assert_eq!(3, nfa.pattern_len());
let nfa = NFA::never_match();
assert_eq!(0, nfa.pattern_len());
let nfa = NFA::always_match();
assert_eq!(1, nfa.pattern_len());
pub fn start_anchored(&self) -> StateID
Return the state identifier of the initial anchored state of this NFA.
The returned identifier is guaranteed to be a valid index into the slice returned by NFA::states, and is also a valid argument to NFA::state.
§Example
This example shows a somewhat contrived example where we can easily predict the anchored starting state.
use regex_automata::nfa::thompson::{NFA, State, WhichCaptures};
let nfa = NFA::compiler()
.configure(NFA::config().which_captures(WhichCaptures::None))
.build("a")?;
let state = nfa.state(nfa.start_anchored());
match *state {
State::ByteRange { trans } => {
assert_eq!(b'a', trans.start);
assert_eq!(b'a', trans.end);
}
_ => unreachable!("unexpected state"),
}
pub fn start_unanchored(&self) -> StateID
Return the state identifier of the initial unanchored state of this NFA.
This is equivalent to the identifier returned by NFA::start_anchored when the NFA has no unanchored starting state.
The returned identifier is guaranteed to be a valid index into the slice returned by NFA::states, and is also a valid argument to NFA::state.
§Example
This example shows that the anchored and unanchored starting states are equivalent when an anchored NFA is built.
use regex_automata::nfa::thompson::NFA;
let nfa = NFA::new("^a")?;
assert_eq!(nfa.start_anchored(), nfa.start_unanchored());
pub fn start_pattern(&self, pid: PatternID) -> Option<StateID>
Return the state identifier of the initial anchored state for the given pattern, or None if there is no pattern corresponding to the given identifier.
If one uses the starting state for a particular pattern, then the only match that can be returned is for the corresponding pattern.
The returned identifier is guaranteed to be a valid index into the slice returned by NFA::states, and is also a valid argument to NFA::state.
§Errors
If the pattern doesn’t exist in this NFA, then this returns None. This occurs when pid.as_usize() >= nfa.pattern_len().
§Example
This example shows that the anchored and unanchored starting states are equivalent when all patterns are anchored, and that each pattern nevertheless gets its own distinct anchored starting state.
use regex_automata::{nfa::thompson::NFA, PatternID};
let nfa = NFA::new_many(&["^a", "^b"])?;
// The anchored and unanchored states for the entire NFA are the same,
// since all of the patterns are anchored.
assert_eq!(nfa.start_anchored(), nfa.start_unanchored());
// But the anchored starting states for each pattern are distinct,
// because these starting states can only lead to matches for the
// corresponding pattern.
let anchored = Some(nfa.start_anchored());
assert_ne!(anchored, nfa.start_pattern(PatternID::must(0)));
assert_ne!(anchored, nfa.start_pattern(PatternID::must(1)));
// Requesting a pattern not in the NFA will result in None:
assert_eq!(None, nfa.start_pattern(PatternID::must(2)));
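To see the practical effect described above, a search can be anchored to a specific pattern. This is a sketch (not from the original docs) using the PikeVM and Anchored::Pattern, which routes the search through that pattern's anchored starting state:
use regex_automata::{
nfa::thompson::pikevm::PikeVM,
Anchored, Input, Match, PatternID,
};
let re = PikeVM::new_many(&["[a-z]+", "[0-9]+"])?;
let (mut cache, mut caps) = (re.create_cache(), re.create_captures());
// Anchoring the search to pattern 1 means only pattern 1 can match.
let input = Input::new("123").anchored(Anchored::Pattern(PatternID::must(1)));
re.search(&mut cache, &input, &mut caps);
assert_eq!(Some(Match::must(1, 0..3)), caps.get_match());
// Anchoring to pattern 0 finds no match in this haystack at all, since
// only pattern 0 is permitted to match.
let input = Input::new("123").anchored(Anchored::Pattern(PatternID::must(0)));
re.search(&mut cache, &input, &mut caps);
assert_eq!(None, caps.get_match());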
pub(crate) fn byte_class_set(&self) -> &ByteClassSet
Get the byte class set for this NFA.
A byte class set is a partitioning of this NFA’s alphabet into equivalence classes. Any two bytes in the same equivalence class are guaranteed to never discriminate between a match or a non-match. (The partitioning may not be minimal.)
Byte classes are used internally by this crate when building DFAs. Namely, among other optimizations, they enable a space optimization where the DFA’s internal alphabet is defined over the equivalence classes of bytes instead of all possible byte values. The former is often quite a bit smaller than the latter, which permits the DFA to use less space for its transition table.
pub fn byte_classes(&self) -> &ByteClasses
Get the byte classes for this NFA.
Byte classes represent a partitioning of this NFA’s alphabet into equivalence classes. Any two bytes in the same equivalence class are guaranteed to never discriminate between a match or a non-match. (The partitioning may not be minimal.)
Byte classes are used internally by this crate when building DFAs. Namely, among other optimizations, they enable a space optimization where the DFA’s internal alphabet is defined over the equivalence classes of bytes instead of all possible byte values. The former is often quite a bit smaller than the latter, which permits the DFA to use less space for its transition table.
§Example
This example shows how to query the class of various bytes.
use regex_automata::nfa::thompson::NFA;
let nfa = NFA::new("[a-z]+")?;
let classes = nfa.byte_classes();
// 'a' and 'z' are in the same class for this regex.
assert_eq!(classes.get(b'a'), classes.get(b'z'));
// But 'a' and 'A' are not.
assert_ne!(classes.get(b'a'), classes.get(b'A'));
pub fn state(&self, id: StateID) -> &State
Return a reference to the NFA state corresponding to the given ID.
This is a convenience routine for nfa.states()[id].
§Panics
This panics when the given identifier does not reference a valid state. That is, when id.as_usize() >= nfa.states().len().
§Example
The anchored state for a pattern will typically correspond to a capturing state for that pattern. (Although, this is not an API guarantee!)
use regex_automata::{nfa::thompson::{NFA, State}, PatternID};
let nfa = NFA::new("a")?;
let state = nfa.state(nfa.start_pattern(PatternID::ZERO).unwrap());
match *state {
State::Capture { slot, .. } => {
assert_eq!(0, slot.as_usize());
}
_ => unreachable!("unexpected state"),
}
pub fn states(&self) -> &[State]
Returns a slice of all states in this NFA.
The slice returned is indexed by StateID. This provides a convenient way to access states while following transitions among those states.
§Example
This demonstrates that disabling UTF-8 mode can shrink the size of the NFA considerably in some cases, especially when using Unicode character classes.
use regex_automata::nfa::thompson::NFA;
let nfa_unicode = NFA::new(r"\w")?;
let nfa_ascii = NFA::new(r"(?-u)\w")?;
// Yes, a factor of 45 difference. No lie.
assert!(40 * nfa_ascii.states().len() < nfa_unicode.states().len());
pub fn group_info(&self) -> &GroupInfo
Returns the capturing group info for this NFA.
The GroupInfo provides a way to map to and from capture index and capture name for each pattern. It also provides a mapping from each of the capturing groups in every pattern to their corresponding slot offsets encoded in State::Capture states.
Note that GroupInfo uses reference counting internally, such that cloning a GroupInfo is very cheap.
§Example
This example shows how to get a list of all capture group names for a particular pattern.
use regex_automata::{nfa::thompson::NFA, PatternID};
let nfa = NFA::new(r"(a)(?P<foo>b)(c)(d)(?P<bar>e)")?;
// The first is the implicit group that is always unnamed. The next
// 5 groups are the explicit groups found in the concrete syntax above.
let expected = vec![None, None, Some("foo"), None, None, Some("bar")];
let got: Vec<Option<&str>> =
nfa.group_info().pattern_names(PatternID::ZERO).collect();
assert_eq!(expected, got);
// Using an invalid pattern ID will result in nothing yielded.
let got = nfa.group_info().pattern_names(PatternID::must(999)).count();
assert_eq!(0, got);
pub fn has_capture(&self) -> bool
Returns true if and only if this NFA has at least one Capture in its sequence of states.
This is useful as a way to perform a quick test before attempting something that does or does not require capture states. For example, some regex engines (like the PikeVM) require capture states in order to work at all.
§Example
This example shows a few different NFAs and whether they have captures or not.
use regex_automata::nfa::thompson::{NFA, WhichCaptures};
// Obviously has capture states.
let nfa = NFA::new("(a)")?;
assert!(nfa.has_capture());
// Less obviously has capture states, because every pattern has at
// least one anonymous capture group corresponding to the match for the
// entire pattern.
let nfa = NFA::new("a")?;
assert!(nfa.has_capture());
// Other than hand building your own NFA, this is the only way to build
// an NFA without capturing groups. In general, you should only do this
// if you don't intend to use any of the NFA-oriented regex engines.
// Overall, capturing groups don't have many downsides. Although they
// can add a bit of noise to simple NFAs, so it can be nice to disable
// them for debugging purposes.
//
// Notice that 'has_capture' is false here even when we have an
// explicit capture group in the pattern.
let nfa = NFA::compiler()
.configure(NFA::config().which_captures(WhichCaptures::None))
.build("(a)")?;
assert!(!nfa.has_capture());
pub fn has_empty(&self) -> bool
Returns true if and only if this NFA can match the empty string. When it returns false, all possible matches are guaranteed to have a non-zero length.
This is useful as a cheap way to know whether code needs to handle the case of a zero length match. This is particularly important when UTF-8 mode is enabled, since in that mode, empty matches that split a codepoint must never be reported. This extra handling can sometimes be costly, and since regexes matching an empty string are somewhat rare, it can be beneficial to treat such regexes specially.
§Example
This example shows a few different NFAs and whether they match the empty string or not. Notice the empty string isn’t merely a matter of a string of length literally 0, but rather, whether a match can occur between specific pairs of bytes.
use regex_automata::{nfa::thompson::NFA, util::syntax};
// The empty regex matches the empty string.
let nfa = NFA::new("")?;
assert!(nfa.has_empty(), "empty matches empty");
// The '+' repetition operator requires at least one match, and so
// does not match the empty string.
let nfa = NFA::new("a+")?;
assert!(!nfa.has_empty(), "+ does not match empty");
// But the '*' repetition operator does.
let nfa = NFA::new("a*")?;
assert!(nfa.has_empty(), "* does match empty");
// And wrapping '+' in an operator that can match an empty string also
// causes it to match the empty string too.
let nfa = NFA::new("(a+)*")?;
assert!(nfa.has_empty(), "+ inside of * matches empty");
// If a regex is just made of a look-around assertion, even if the
// assertion requires some kind of non-empty string around it (such as
// \b), then it is still treated as if it matches the empty string.
// Namely, if a match occurs of just a look-around assertion, then the
// match returned is empty.
let nfa = NFA::compiler()
.syntax(syntax::Config::new().utf8(false))
.build(r"^$\A\z\b\B(?-u:\b\B)")?;
assert!(nfa.has_empty(), "assertions match empty");
// Even when an assertion is wrapped in a '+', it still matches the
// empty string.
let nfa = NFA::new(r"\b+")?;
assert!(nfa.has_empty(), "+ of an assertion matches empty");
// An alternation with even one branch that can match the empty string
// is also said to match the empty string overall.
let nfa = NFA::new("foo|(bar)?|quux")?;
assert!(nfa.has_empty(), "alternations can match empty");
// An NFA that matches nothing does not match the empty string.
let nfa = NFA::new("[a&&b]")?;
assert!(!nfa.has_empty(), "never matching means not matching empty");
// But if it's wrapped in something that doesn't require a match at
// all, then it can match the empty string!
let nfa = NFA::new("[a&&b]*")?;
assert!(nfa.has_empty(), "* on never-match still matches empty");
// Since a '+' requires a match, using it on something that can never
// match will itself produce a regex that can never match anything,
// and thus does not match the empty string.
let nfa = NFA::new("[a&&b]+")?;
assert!(!nfa.has_empty(), "+ on never-match still matches nothing");
pub fn is_utf8(&self) -> bool
Whether UTF-8 mode is enabled for this NFA or not.
When UTF-8 mode is enabled, all matches reported by a regex engine derived from this NFA are guaranteed to correspond to spans of valid UTF-8. This includes zero-width matches. For example, the regex engine must guarantee that the empty regex will not match at the positions between code units in the UTF-8 encoding of a single codepoint.
See Config::utf8 for more information.
This is enabled by default.
§Example
This example shows how UTF-8 mode can impact the match spans that may be reported in certain cases.
use regex_automata::{
nfa::thompson::{self, pikevm::PikeVM},
Match, Input,
};
let re = PikeVM::new("")?;
let (mut cache, mut caps) = (re.create_cache(), re.create_captures());
// UTF-8 mode is enabled by default.
let mut input = Input::new("☃");
re.search(&mut cache, &input, &mut caps);
assert_eq!(Some(Match::must(0, 0..0)), caps.get_match());
// Even though an empty regex matches at 1..1, our next match is
// 3..3 because 1..1 and 2..2 split the snowman codepoint (which is
// three bytes long).
input.set_start(1);
re.search(&mut cache, &input, &mut caps);
assert_eq!(Some(Match::must(0, 3..3)), caps.get_match());
// But if we disable UTF-8, then we'll get matches at 1..1 and 2..2:
let re = PikeVM::builder()
.thompson(thompson::Config::new().utf8(false))
.build("")?;
// Re-create the cache and captures for the newly built PikeVM.
let (mut cache, mut caps) = (re.create_cache(), re.create_captures());
re.search(&mut cache, &input, &mut caps);
assert_eq!(Some(Match::must(0, 1..1)), caps.get_match());
input.set_start(2);
re.search(&mut cache, &input, &mut caps);
assert_eq!(Some(Match::must(0, 2..2)), caps.get_match());
input.set_start(3);
re.search(&mut cache, &input, &mut caps);
assert_eq!(Some(Match::must(0, 3..3)), caps.get_match());
input.set_start(4);
re.search(&mut cache, &input, &mut caps);
assert_eq!(None, caps.get_match());
pub fn is_reverse(&self) -> bool
Returns true when this NFA is meant to be matched in reverse.
Generally speaking, when this is true, it means the NFA is supposed to be used in conjunction with moving backwards through the haystack. That is, from a higher memory address to a lower memory address.
It is often the case that lower level routines dealing with an NFA don’t need to care about whether it is “meant” to be matched in reverse or not. However, there are some specific cases where it matters. For example, the implementation of CRLF-aware ^ and $ line anchors needs to know whether the search is in the forward or reverse direction. In the forward direction, neither ^ nor $ should match when a \r has been seen previously and a \n is next. However, in the reverse direction, neither ^ nor $ should match when a \n has been seen previously and a \r is next. This fundamentally changes how the state machine is constructed, and thus needs to be altered based on the direction of the search.
This is automatically set when using a Compiler with a configuration where Config::reverse is enabled. If you’re building your own NFA by hand via a Builder, then it must be set explicitly.
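§Example
This is a brief sketch (not from the original docs) showing the flag being set via the reverse knob on the NFA compiler configuration:
use regex_automata::nfa::thompson::NFA;
// A forward NFA is built by default.
let nfa = NFA::new("abc")?;
assert!(!nfa.is_reverse());
// Enabling Config::reverse produces an NFA meant for reverse searching.
let nfa = NFA::compiler()
.configure(NFA::config().reverse(true))
.build("abc")?;
assert!(nfa.is_reverse());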
pub fn is_always_start_anchored(&self) -> bool
Returns true if and only if all starting states for this NFA correspond to the beginning of an anchored search.
Typically, an NFA will have both an anchored and an unanchored starting state. Namely, because it tends to be useful to have both and the cost of having an unanchored starting state is almost zero (for an NFA). However, if all patterns in the NFA are themselves anchored, then even the unanchored starting state will correspond to an anchored search since the pattern doesn’t permit anything else.
§Example
This example shows a few different scenarios where this method’s return value varies.
use regex_automata::nfa::thompson::NFA;
// The unanchored starting state permits matching this pattern anywhere
// in a haystack, instead of just at the beginning.
let nfa = NFA::new("a")?;
assert!(!nfa.is_always_start_anchored());
// In this case, the pattern is itself anchored, so there is no way
// to run an unanchored search.
let nfa = NFA::new("^a")?;
assert!(nfa.is_always_start_anchored());
// When multiline mode is enabled, '^' can match at the start of a line
// in addition to the start of a haystack, so an unanchored search is
// actually possible.
let nfa = NFA::new("(?m)^a")?;
assert!(!nfa.is_always_start_anchored());
// Weird cases also work. A pattern is only considered anchored if all
// matches may only occur at the start of a haystack.
let nfa = NFA::new("(^a)|a")?;
assert!(!nfa.is_always_start_anchored());
// When multiple patterns are present, if they are all anchored, then
// the NFA is always anchored too.
let nfa = NFA::new_many(&["^a", "^b", "^c"])?;
assert!(nfa.is_always_start_anchored());
// But if one pattern is unanchored, then the NFA must permit an
// unanchored search.
let nfa = NFA::new_many(&["^a", "b", "^c"])?;
assert!(!nfa.is_always_start_anchored());
pub fn look_matcher(&self) -> &LookMatcher
Returns the look-around matcher associated with this NFA.
A look-around matcher determines how to match look-around assertions. In particular, some assertions are configurable. For example, the (?m:^) and (?m:$) assertions can have their line terminator changed from the default of \n to any other byte.
If the NFA was built using a Compiler, then this matcher can be set via the Config::look_matcher configuration knob. Otherwise, if you’ve built an NFA by hand, it is set via Builder::set_look_matcher.
§Example
This shows how to change the line terminator for multi-line assertions.
use regex_automata::{
nfa::thompson::{self, pikevm::PikeVM},
util::look::LookMatcher,
Match, Input,
};
let mut lookm = LookMatcher::new();
lookm.set_line_terminator(b'\x00');
let re = PikeVM::builder()
.thompson(thompson::Config::new().look_matcher(lookm))
.build(r"(?m)^[a-z]+$")?;
let mut cache = re.create_cache();
// Multi-line assertions now use NUL as a terminator.
assert_eq!(
Some(Match::must(0, 1..4)),
re.find(&mut cache, b"\x00abc\x00"),
);
// ... and \n is no longer recognized as a terminator.
assert_eq!(
None,
re.find(&mut cache, b"\nabc\n"),
);
pub fn look_set_any(&self) -> LookSet
Returns the union of all look-around assertions used throughout this NFA. When the returned set is empty, it implies that the NFA has no look-around assertions and thus zero conditional epsilon transitions.
This is useful in some cases for enabling optimizations. It is not unusual, for example, for optimizations to be of the form, “for any regex with zero conditional epsilon transitions, do …” where “…” is some kind of optimization.
This isn’t only helpful for optimizations either. Sometimes look-around assertions are difficult to support. For example, many of the DFAs in this crate don’t support Unicode word boundaries or handle them using heuristics. Handling that correctly typically requires some kind of cheap check of whether the NFA has a Unicode word boundary in the first place.
§Example
This example shows how this routine varies based on the regex pattern:
use regex_automata::{nfa::thompson::NFA, util::look::Look};
// No look-around at all.
let nfa = NFA::new("a")?;
assert!(nfa.look_set_any().is_empty());
// When multiple patterns are present, since this returns the union,
// it will include look-around assertions that only appear in one
// pattern.
let nfa = NFA::new_many(&["a", "b", "a^b", "c"])?;
assert!(nfa.look_set_any().contains(Look::Start));
// Some groups of assertions have various shortcuts. For example:
let nfa = NFA::new(r"(?-u:\b)")?;
assert!(nfa.look_set_any().contains_word());
assert!(!nfa.look_set_any().contains_word_unicode());
assert!(nfa.look_set_any().contains_word_ascii());
pub fn look_set_prefix_any(&self) -> LookSet
Returns the union of all prefix look-around assertions for every pattern in this NFA. When the returned set is empty, it implies none of the patterns require moving through a conditional epsilon transition before inspecting the first byte in the haystack.
This can be useful for determining what kinds of assertions need to be satisfied at the beginning of a search. For example, typically DFAs in this crate will build a distinct starting state for each possible starting configuration that might result in look-around assertions being satisfied differently. However, if the set returned here is empty, then you know that the start state is invariant because there are no conditional epsilon transitions to consider.
§Example
This example shows how this routine varies based on the regex pattern:
use regex_automata::{nfa::thompson::NFA, util::look::Look};
// No look-around at all.
let nfa = NFA::new("a")?;
assert!(nfa.look_set_prefix_any().is_empty());
// When multiple patterns are present, since this returns the union,
// it will include look-around assertions that only appear in one
// pattern. But it will only include assertions that are in the prefix
// of a pattern. For example, this includes '^' but not '$' even though
// '$' does appear.
let nfa = NFA::new_many(&["a", "b", "^ab$", "c"])?;
assert!(nfa.look_set_prefix_any().contains(Look::Start));
assert!(!nfa.look_set_prefix_any().contains(Look::End));
pub fn memory_usage(&self) -> usize
Returns the memory usage, in bytes, of this NFA.
This does not include the stack size used up by this NFA. To compute that, use std::mem::size_of::<NFA>().
§Example
This example shows that large Unicode character classes can use quite a bit of memory.
use regex_automata::nfa::thompson::NFA;
let nfa_unicode = NFA::new(r"\w")?;
let nfa_ascii = NFA::new(r"(?-u:\w)")?;
assert!(10 * nfa_ascii.memory_usage() < nfa_unicode.memory_usage());
Trait Implementations§
Auto Trait Implementations§
impl Freeze for NFA
impl RefUnwindSafe for NFA
impl Send for NFA
impl Sync for NFA
impl Unpin for NFA
impl UnwindSafe for NFA
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
unsafe fn clone_to_uninit(&self, dst: *mut T)