"""Implements a Jinja / Python combination lexer. The ``Lexer`` class
is used to do some preprocessing. It filters out invalid operators like
the bitshift operators we don't allow in templates. It separates
template code and python code in expressions.
"""
import re
import typing as t
from ast import literal_eval
from collections import deque
from sys import intern

from ._identifier import pattern as name_re
from .exceptions import TemplateSyntaxError
from .utils import LRUCache

if t.TYPE_CHECKING:
    import typing_extensions as te

    from .environment import Environment

# cache for the lexers. Exists in order to be able to have multiple
# environments with the same lexer
_lexer_cache: t.MutableMapping[t.Tuple, "Lexer"] = LRUCache(50)  # type: ignore

# static regular expressions
whitespace_re = re.compile(r"\s+")
newline_re = re.compile(r"(\r\n|\r|\n)")
string_re = re.compile(
    r"('([^'\\]*(?:\\.[^'\\]*)*)'" r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S
)
integer_re = re.compile(
    r"""
    (
        0b(_?[0-1])+ # binary
    |
        0o(_?[0-7])+ # octal
    |
        0x(_?[\da-f])+ # hex
    |
        [1-9](_?\d)* # decimal
    |
        0(_?0)* # decimal zero
    )
    """,
    re.IGNORECASE | re.VERBOSE,
)
float_re = re.compile(
    r"""
    (?<!\.)  # doesn't start with a .
    (\d+_)*\d+  # digits, possibly _ separated
    (
        (\.(\d+_)*\d+)?  # decimal part
        e[+\-]?(\d+_)*\d+  # exponent part
    |
        \.(\d+_)*\d+  # decimal part
    )
    """,
    re.IGNORECASE | re.VERBOSE,
)

# internal the tokens and keep references to them
TOKEN_ADD = intern("add")
TOKEN_ASSIGN = intern("assign")
TOKEN_COLON = intern("colon")
TOKEN_COMMA = intern("comma")
TOKEN_DIV = intern("div")
TOKEN_DOT = intern("dot")
TOKEN_EQ = intern("eq")
TOKEN_FLOORDIV = intern("floordiv")
TOKEN_GT = intern("gt")
TOKEN_GTEQ = intern("gteq")
TOKEN_LBRACE = intern("lbrace")
TOKEN_LBRACKET = intern("lbracket")
TOKEN_LPAREN = intern("lparen")
TOKEN_LT = intern("lt")
TOKEN_LTEQ = intern("lteq")
TOKEN_MOD = intern("mod")
TOKEN_MUL = intern("mul")
TOKEN_NE = intern("ne")
TOKEN_PIPE = intern("pipe")
TOKEN_POW = intern("pow")
TOKEN_RBRACE = intern("rbrace")
TOKEN_RBRACKET = intern("rbracket")
TOKEN_RPAREN = intern("rparen")
TOKEN_SEMICOLON = intern("semicolon")
TOKEN_SUB = intern("sub")
TOKEN_TILDE = intern("tilde")
TOKEN_WHITESPACE = intern("whitespace")
TOKEN_FLOAT = intern("float")
TOKEN_INTEGER = intern("integer")
TOKEN_NAME = intern("name")
TOKEN_STRING = intern("string")
TOKEN_OPERATOR = intern("operator")
TOKEN_BLOCK_BEGIN = intern("block_begin")
TOKEN_BLOCK_END = intern("block_end")
TOKEN_VARIABLE_BEGIN = intern("variable_begin")
TOKEN_VARIABLE_END = intern("variable_end")
TOKEN_RAW_BEGIN = intern("raw_begin")
TOKEN_RAW_END = intern("raw_end")
TOKEN_COMMENT_BEGIN = intern("comment_begin")
TOKEN_COMMENT_END = intern("comment_end")
TOKEN_COMMENT = intern("comment")
TOKEN_LINESTATEMENT_BEGIN = intern("linestatement_begin")
TOKEN_LINESTATEMENT_END = intern("linestatement_end")
TOKEN_LINECOMMENT_BEGIN = intern("linecomment_begin")
TOKEN_LINECOMMENT_END = intern("linecomment_end")
TOKEN_LINECOMMENT = intern("linecomment")
TOKEN_DATA = intern("data")
TOKEN_INITIAL = intern("initial")
TOKEN_EOF = intern("eof")

# bind operators to token types
operators = {
    "+": TOKEN_ADD,
    "-": TOKEN_SUB,
    "/": TOKEN_DIV,
    "//": TOKEN_FLOORDIV,
    "*": TOKEN_MUL,
    "%": TOKEN_MOD,
    "**": TOKEN_POW,
    "~": TOKEN_TILDE,
    "[": TOKEN_LBRACKET,
    "]": TOKEN_RBRACKET,
    "(": TOKEN_LPAREN,
    ")": TOKEN_RPAREN,
    "{": TOKEN_LBRACE,
    "}": TOKEN_RBRACE,
    "==": TOKEN_EQ,
    "!=": TOKEN_NE,
    ">": TOKEN_GT,
    ">=": TOKEN_GTEQ,
    "<": TOKEN_LT,
    "<=": TOKEN_LTEQ,
    "=": TOKEN_ASSIGN,
    ".": TOKEN_DOT,
    ":": TOKEN_COLON,
    "|": TOKEN_PIPE,
    ",": TOKEN_COMMA,
    ";": TOKEN_SEMICOLON,
}

reverse_operators = {v: k for k, v in operators.items()}
assert len(operators) == len(reverse_operators), "operators dropped"
operator_re = re.compile(
    f"({'|'.join(re.escape(x) for x in sorted(operators, key=lambda x: -len(x)))})"
)

ignored_tokens = frozenset(
    [
        TOKEN_COMMENT_BEGIN,
        TOKEN_COMMENT,
        TOKEN_COMMENT_END,
        TOKEN_WHITESPACE,
        TOKEN_LINECOMMENT_BEGIN,
        TOKEN_LINECOMMENT_END,
        TOKEN_LINECOMMENT,
    ]
)
ignore_if_empty = frozenset(
    [TOKEN_WHITESPACE, TOKEN_DATA, TOKEN_COMMENT, TOKEN_LINECOMMENT]
)


def _describe_token_type(token_type: str) -> str:
    if token_type in reverse_operators:
        return reverse_operators[token_type]

    return {
        TOKEN_COMMENT_BEGIN: "begin of comment",
        TOKEN_COMMENT_END: "end of comment",
        TOKEN_COMMENT: "comment",
        TOKEN_LINECOMMENT: "comment",
        TOKEN_BLOCK_BEGIN: "begin of statement block",
        TOKEN_BLOCK_END: "end of statement block",
        TOKEN_VARIABLE_BEGIN: "begin of print statement",
        TOKEN_VARIABLE_END: "end of print statement",
        TOKEN_LINESTATEMENT_BEGIN: "begin of line statement",
        TOKEN_LINESTATEMENT_END: "end of line statement",
        TOKEN_DATA: "template data / text",
        TOKEN_EOF: "end of template",
    }.get(token_type, token_type)


def describe_token(token: "Token") -> str:
    """Returns a description of the token."""
    if token.type is TOKEN_NAME:
        return token.value

    return _describe_token_type(token.type)


def describe_token_expr(expr: str) -> str:
    """Like `describe_token` but for token expressions."""
    if ":" in expr:
        type, value = expr.split(":", 1)

        if type == TOKEN_NAME:
            return value
    else:
        type = expr

    return _describe_token_type(type)


def count_newlines(value: str) -> int:
    """Count the number of newline characters in the string. This is
    useful for extensions that filter a stream.
    """
    return len(newline_re.findall(value))


def compile_rules(environment: "Environment") -> t.List[t.Tuple[str, str]]:
    """Compiles all the rules from the environment into a list of rules."""
    e = re.escape
    rules = [
        (
            len(environment.comment_start_string),
            TOKEN_COMMENT_BEGIN,
            e(environment.comment_start_string),
        ),
        (
            len(environment.block_start_string),
            TOKEN_BLOCK_BEGIN,
            e(environment.block_start_string),
        ),
        (
            len(environment.variable_start_string),
            TOKEN_VARIABLE_BEGIN,
            e(environment.variable_start_string),
        ),
    ]

    if environment.line_statement_prefix is not None:
        rules.append(
            (
                len(environment.line_statement_prefix),
                TOKEN_LINESTATEMENT_BEGIN,
                r"^[ \t\v]*" + e(environment.line_statement_prefix),
            )
        )
    if environment.line_comment_prefix is not None:
        rules.append(
            (
                len(environment.line_comment_prefix),
                TOKEN_LINECOMMENT_BEGIN,
                r"(?:^|(?<=\S))[^\S\r\n]*" + e(environment.line_comment_prefix),
            )
        )

    return [x[1:] for x in sorted(rules, reverse=True)]


class Failure:
    """Class that raises a `TemplateSyntaxError` if called.
    Used by the `Lexer` to specify known errors.
    """

    def __init__(
        self, message: str, cls: t.Type[TemplateSyntaxError] = TemplateSyntaxError
    ) -> None:
        self.message = message
        self.error_class = cls

    def __call__(self, lineno: int, filename: t.Optional[str]) -> "te.NoReturn":
        raise self.error_class(self.message, lineno, filename)


class Token(t.NamedTuple):
    lineno: int
    type: str
    value: str

    def __str__(self) -> str:
        return describe_token(self)

    def test(self, expr: str) -> bool:
        """Test a token against a token expression. This can either be a
        token type or ``'token_type:token_value'``. This can only test
        against string values and types.
        """
        # here we do a regular string equality check as test_any is usually
        # passed an iterable of not interned strings.
        if self.type == expr:
            return True

        if ":" in expr:
            return expr.split(":", 1) == [self.type, self.value]

        return False

    def test_any(self, *iterable: str) -> bool:
        """Test against multiple token expressions."""
        return any(self.test(expr) for expr in iterable)


class TokenStreamIterator:
    """The iterator for tokenstreams.  Iterate over the stream
    until the eof token is reached.
    """

    def __init__(self, stream: "TokenStream") -> None:
        self.stream = stream

    def __iter__(self) -> "TokenStreamIterator":
        return self

    def __next__(self) -> Token:
        token = self.stream.current

        if token.type is TOKEN_EOF:
            self.stream.close()
            raise StopIteration

        next(self.stream)
        return token


class TokenStream:
    """A token stream is an iterable that yields :class:`Token`\\s.  The
    parser however does not iterate over it but calls :meth:`next` to go
    one token ahead.  The current active token is stored as :attr:`current`.
    """

    def __init__(
        self,
        generator: t.Iterable[Token],
        name: t.Optional[str],
        filename: t.Optional[str],
    ):
        self._iter = iter(generator)
        self._pushed: "te.Deque[Token]" = deque()
        self.name = name
        self.filename = filename
        self.closed = False
        self.current = Token(1, TOKEN_INITIAL, "")
        next(self)

    def __iter__(self) -> TokenStreamIterator:
        return TokenStreamIterator(self)

    def __bool__(self) -> bool:
        return bool(self._pushed) or self.current.type is not TOKEN_EOF

    @property
    def eos(self) -> bool:
        """Are we at the end of the stream?"""
        return not self

    def push(self, token: Token) -> None:
        """Push a token back to the stream."""
        self._pushed.append(token)

    def look(self) -> Token:
        """Look at the next token."""
        old_token = next(self)
        result = self.current
        self.push(result)
        self.current = old_token
        return result

    def skip(self, n: int = 1) -> None:
        """Got n tokens ahead."""
        for _ in range(n):
            next(self)

    def next_if(self, expr: str) -> t.Optional[Token]:
        """Perform the token test and return the token if it matched.
        Otherwise the return value is `None`.
        """
        if self.current.test(expr):
            return next(self)

        return None

    def skip_if(self, expr: str) -> bool:
        """Like :meth:`next_if` but only returns `True` or `False`."""
        return self.next_if(expr) is not None

    def __next__(self) -> Token:
        """Go one token ahead and return the old one.

        Use the built-in :func:`next` instead of calling this directly.
        """
        rv = self.current

        if self._pushed:
            self.current = self._pushed.popleft()
        elif self.current.type is not TOKEN_EOF:
            try:
                self.current = next(self._iter)
            except StopIteration:
                self.close()

        return rv

    def close(self) -> None:
        """Close the stream."""
        self.current = Token(self.current.lineno, TOKEN_EOF, "")
        self._iter = iter(())
        self.closed = True

    def expect(self, expr: str) -> Token:
        """Expect a given token type and return it.  This accepts the same
        argument as :meth:`jinja2.lexer.Token.test`.
        """
        if not self.current.test(expr):
            expr = describe_token_expr(expr)

            if self.current.type is TOKEN_EOF:
                raise TemplateSyntaxError(
                    f"unexpected end of template, expected {expr!r}.",
                    self.current.lineno,
                    self.name,
                    self.filename,
                )

            raise TemplateSyntaxError(
                f"expected token {expr!r}, got {describe_token(self.current)!r}",
                self.current.lineno,
                self.name,
                self.filename,
            )

        return next(self)


def get_lexer(environment: "Environment") -> "Lexer":
    """Return a lexer which is probably cached."""
    key = (
        environment.block_start_string,
        environment.block_end_string,
        environment.variable_start_string,
        environment.variable_end_string,
        environment.comment_start_string,
        environment.comment_end_string,
        environment.line_statement_prefix,
        environment.line_comment_prefix,
        environment.trim_blocks,
        environment.lstrip_blocks,
        environment.newline_sequence,
        environment.keep_trailing_newline,
    )
    lexer = _lexer_cache.get(key)

    if lexer is None:
        _lexer_cache[key] = lexer = Lexer(environment)

    return lexer


class OptionalLStrip(tuple):
    """A special tuple for marking a point in the state that can have
    lstrip applied.
    """

    __slots__ = ()

    # Even though it looks like a no-op, creating instances fails
    # without this.
    def __new__(cls, *members, **kwargs):  # type: ignore
        return super().__new__(cls, members)


class _Rule(t.NamedTuple):
    pattern: t.Pattern[str]
    tokens: t.Union[str, t.Tuple[str, ...], t.Tuple[str]]
    command: t.Optional[str]


class Lexer:
    """Class that implements a lexer for a given environment. Automatically
    created by the environment class, usually you don't have to do that.

    Note that the lexer is not automatically bound to an environment.
    Multiple environments can share the same lexer.
    """

    def __init__(self, environment: "Environment") -> None:
        # shortcuts
        e = re.escape

        def c(x: str) -> t.Pattern[str]:
            return re.compile(x, re.M | re.S)

        # lexing rules for tags
        tag_rules: t.List[_Rule] = [
            _Rule(whitespace_re, TOKEN_WHITESPACE, None),
            _Rule(float_re, TOKEN_FLOAT, None),
            _Rule(integer_re, TOKEN_INTEGER, None),
            _Rule(name_re, TOKEN_NAME, None),
            _Rule(string_re, TOKEN_STRING, None),
            _Rule(operator_re, TOKEN_OPERATOR, None),
        ]

        # assemble the root lexing rule. because "|" is ungreedy
        # we have to sort by length so that the lexer continues working
        # as expected when we have parsing rules like <% for block and
        # <%= for variables. (if someone wants asp like syntax)
        # variables are just part of the rules if variable processing
        # is required.
        root_tag_rules = compile_rules(environment)

        block_start_re = e(environment.block_start_string)
        block_end_re = e(environment.block_end_string)
        comment_end_re = e(environment.comment_end_string)
        variable_end_re = e(environment.variable_end_string)

        # block suffix if trimming is enabled
        block_suffix_re = "\\n?" if environment.trim_blocks else ""

        self.lstrip_blocks = environment.lstrip_blocks

        self.newline_sequence = environment.newline_sequence
        self.keep_trailing_newline = environment.keep_trailing_newline

        root_raw_re = (
            rf"(?P<raw_begin>{block_start_re}(\-|\+|)\s*raw\s*"
            rf"(?:\-{block_end_re}\s*|{block_end_re}))"
        )
        root_parts_re = "|".join(
            [root_raw_re] + [rf"(?P<{n}>{r}(\-|\+|))" for n, r in root_tag_rules]
        )

        # global lexing rules
        self.rules: t.Dict[str, t.List[_Rule]] = {
            "root": [
                # directives
                _Rule(
                    c(rf"(.*?)(?:{root_parts_re})"),
                    OptionalLStrip(TOKEN_DATA, "#bygroup"),  # type: ignore
                    "#bygroup",
                ),
                # data
                _Rule(c(".+"), TOKEN_DATA, None),
            ],
            # comments
            TOKEN_COMMENT_BEGIN: [
                _Rule(
                    c(
                        rf"(.*?)((?:\+{comment_end_re}|\-{comment_end_re}\s*"
                        rf"|{comment_end_re}{block_suffix_re}))"
                    ),
                    (TOKEN_COMMENT, TOKEN_COMMENT_END),
                    "#pop",
                ),
                _Rule(c(r"(.)"), (Failure("Missing end of comment tag"),), None),
            ],
            # blocks
            TOKEN_BLOCK_BEGIN: [
                _Rule(
                    c(
                        rf"(?:\+{block_end_re}|\-{block_end_re}\s*"
                        rf"|{block_end_re}{block_suffix_re})"
                    ),
                    TOKEN_BLOCK_END,
                    "#pop",
                ),
            ]
            + tag_rules,
            # variables
            TOKEN_VARIABLE_BEGIN: [
                _Rule(
                    c(rf"\-{variable_end_re}\s*|{variable_end_re}"),
                    TOKEN_VARIABLE_END,
                    "#pop",
                )
            ]
            + tag_rules,
            # raw block
            TOKEN_RAW_BEGIN: [
                _Rule(
                    c(
                        rf"(.*?)((?:{block_start_re}(\-|\+|))\s*endraw\s*"
                        rf"(?:\+{block_end_re}|\-{block_end_re}\s*"
                        rf"|{block_end_re}{block_suffix_re}))"
                    ),
                    OptionalLStrip(TOKEN_DATA, TOKEN_RAW_END),  # type: ignore
                    "#pop",
                ),
                _Rule(c(r"(.)"), (Failure("Missing end of raw directive"),), None),
            ],
            # line statements
            TOKEN_LINESTATEMENT_BEGIN: [
                _Rule(c(r"\s*(\n|$)"), TOKEN_LINESTATEMENT_END, "#pop")
            ]
            + tag_rules,
            # line comments
            TOKEN_LINECOMMENT_BEGIN: [
                _Rule(
                    c(r"(.*?)()(?=\n|$)"),
                    (TOKEN_LINECOMMENT, TOKEN_LINECOMMENT_END),
                    "#pop",
                )
            ],
        }

    def _normalize_newlines(self, value: str) -> str:
        """Replace all newlines with the configured sequence in strings
        and template data.
        """
        return newline_re.sub(self.newline_sequence, value)

    def tokenize(
        self,
        source: str,
        name: t.Optional[str] = None,
        filename: t.Optional[str] = None,
        state: t.Optional[str] = None,
    ) -> TokenStream:
        """Calls tokeniter + tokenize and wraps it in a token stream."""
        stream = self.tokeniter(source, name, filename, state)
        return TokenStream(self.wrap(stream, name, filename), name, filename)

    def wrap(
        self,
        stream: t.Iterable[t.Tuple[int, str, str]],
        name: t.Optional[str] = None,
        filename: t.Optional[str] = None,
    ) -> t.Iterator[Token]:
        """This is called with the stream as returned by `tokenize` and wraps
        every token in a :class:`Token` and converts the value.
        """
        for lineno, token, value_str in stream:
            if token in ignored_tokens:
                continue

            value: t.Any = value_str

            if token == TOKEN_LINESTATEMENT_BEGIN:
                token = TOKEN_BLOCK_BEGIN
            elif token == TOKEN_LINESTATEMENT_END:
                token = TOKEN_BLOCK_END
            # we are not interested in those tokens in the parser
            elif token in (TOKEN_RAW_BEGIN, TOKEN_RAW_END):
                continue
            elif token == TOKEN_DATA:
                value = self._normalize_newlines(value_str)
            elif token == "keyword":
                token = value_str
            elif token == TOKEN_NAME:
                value = value_str

                if not value.isidentifier():
                    raise TemplateSyntaxError(
                        "Invalid character in identifier", lineno, name, filename
                    )
            elif token == TOKEN_STRING:
                # try to unescape string
                try:
                    value = (
                        self._normalize_newlines(value_str[1:-1])
                        .encode("ascii", "backslashreplace")
                        .decode("unicode-escape")
                    )
                except Exception as e:
                    msg = str(e).split(":")[-1].strip()
                    raise TemplateSyntaxError(msg, lineno, name, filename) from e
            elif token == TOKEN_INTEGER:
                value = int(value_str.replace("_", ""), 0)
            elif token == TOKEN_FLOAT:
                # remove all "_" first to support more Python versions
                value = literal_eval(value_str.replace("_", ""))
            elif token == TOKEN_OPERATOR:
                token = operators[value_str]

            yield Token(lineno, token, value)

    def tokeniter(
        self,
        source: str,
        name: t.Optional[str],
        filename: t.Optional[str] = None,
        state: t.Optional[str] = None,
    ) -> t.Iterator[t.Tuple[int, str, str]]:
        """This method tokenizes the text and returns the tokens in a
        generator. Use this method if you just want to tokenize a template.

        .. versionchanged:: 3.0
            Only ``\\n``, ``\\r\\n`` and ``\\r`` are treated as line
            breaks.
        """
        lines = newline_re.split(source)[::2]

        if not self.keep_trailing_newline and lines[-1] == "":
            del lines[-1]

        source = "\n".join(lines)
        pos = 0
        lineno = 1
        stack = ["root"]

        if state is not None and state != "root":
            assert state in ("variable", "block"), "invalid state"
            stack.append(state + "_begin")

        statetokens = self.rules[stack[-1]]
        source_length = len(source)
        balancing_stack: t.List[str] = []
        newlines_stripped = 0
        line_starting = True

        while True:
            # tokenizer loop
            for regex, tokens, new_state in statetokens:
                m = regex.match(source, pos)

                # if no match we try again with the next rule
                if m is None:
                    continue

                # we only match blocks and variables if braces / parentheses
                # are balanced. continue parsing with the lower rule which
                # is the operator rule. do this only if the end tags look
                # like operators
                if balancing_stack and tokens in (
                    TOKEN_VARIABLE_END,
                    TOKEN_BLOCK_END,
                    TOKEN_LINESTATEMENT_END,
                ):
                    continue

                # tuples support more options
                if isinstance(tokens, tuple):
                    groups: t.Sequence[str] = m.groups()

                    if isinstance(tokens, OptionalLStrip):
                        # Rule supports lstrip. Match will look like
                        # text + block or variable begin tag.
                        text = groups[0]
                        # Skipping the text and first match, pick the rest.
                        strip_sign = next(g for g in groups[2::2] if g is not None)

                        if strip_sign == "-":
                            # Strip all whitespace between the text and the tag.
                            stripped = text.rstrip()
                            newlines_stripped = text[len(stripped) :].count("\n")
                            groups = [stripped, *groups[1:]]
                        elif (
                            # Not marked for preserving whitespace.
                            strip_sign != "+"
                            # lstrip is enabled.
                            and self.lstrip_blocks
                            # Not a variable expression.
                            and not m.groupdict().get(TOKEN_VARIABLE_BEGIN)
                        ):
                            # The start of text between the last newline and the tag.
                            l_pos = text.rfind("\n") + 1

                            if l_pos > 0 or line_starting:
                                # If there's only whitespace between the newline
                                # and the tag, strip it.
                                if whitespace_re.fullmatch(text, l_pos):
                                    groups = [text[:l_pos], *groups[1:]]

                    for idx, token in enumerate(tokens):
                        # failure group
                        if token.__class__ is Failure:
                            raise token(lineno, filename)
                        # bygroup is a bit more complex, in that case we
                        # yield for the current token the first named
                        # group that matched
                        elif token == "#bygroup":
                            for key, value in m.groupdict().items():
                                if value is not None:
                                    yield lineno, key, value
                                    lineno += value.count("\n")
                                    break
                            else:
                                raise RuntimeError(
                                    f"{regex!r} wanted to resolve the token dynamically"
                                    " but no group matched"
                                )
                        # normal group
                        else:
                            data = groups[idx]

                            if data or token not in ignore_if_empty:
                                yield lineno, token, data

                            lineno += data.count("\n") + newlines_stripped
                            newlines_stripped = 0

                # strings as token just are yielded as it.
                else:
                    data = m.group()

                    # update brace/parentheses balance
                    if tokens == TOKEN_OPERATOR:
                        if data == "{":
                            balancing_stack.append("}")
                        elif data == "(":
                            balancing_stack.append(")")
                        elif data == "[":
                            balancing_stack.append("]")
                        elif data in ("}", ")", "]"):
                            if not balancing_stack:
                                raise TemplateSyntaxError(
                                    f"unexpected '{data}'", lineno, name, filename
                                )

                            expected_op = balancing_stack.pop()

                            if expected_op != data:
                                raise TemplateSyntaxError(
                                    f"unexpected '{data}', expected '{expected_op}'",
                                    lineno,
                                    name,
                                    filename,
                                )

                    # yield items
                    if data or tokens not in ignore_if_empty:
                        yield lineno, tokens, data

                    lineno += data.count("\n")

                line_starting = m.group()[-1:] == "\n"
                # fetch new position into new variable so that we can check
                # if there is a internal parsing error which would result
                # in an infinite loop
                pos2 = m.end()

                # handle state changes
                if new_state is not None:
                    # remove the uppermost state
                    if new_state == "#pop":
                        stack.pop()
                    # resolve the new state by group checking
                    elif new_state == "#bygroup":
                        for key, value in m.groupdict().items():
                            if value is not None:
                                stack.append(key)
                                break
                        else:
                            raise RuntimeError(
                                f"{regex!r} wanted to resolve the new state dynamically"
                                " but no group matched"
                            )
                    # direct state name given
                    else:
                        stack.append(new_state)

                    statetokens = self.rules[stack[-1]]
                # we are still at the same position and no stack change.
                # this means a loop without break condition, avoid that and
                # raise error
                elif pos2 == pos:
                    raise RuntimeError(
                        f"{regex!r} yielded empty string without stack change"
                    )

                # publish new function and start again
                pos = pos2
                break
            # if loop terminated without break we haven't found a single match
            # either we are at the end of the file or we have a problem
            else:
                # end of text
                if pos >= source_length:
                    return

                # something went wrong
                raise TemplateSyntaxError(
                    f"unexpected char {source[pos]!r} at {pos}", lineno, name, filename
                )