Current File : C:/cygwin64/lib/python3.9/site-packages/pip/_vendor/pygments/__pycache__/lexer.cpython-39.pyc
[ Binary content omitted: CPython 3.9 bytecode (__pycache__/lexer.cpython-39.pyc) of the
pip._vendor.pygments.lexer module. The following information is recoverable from the dump. ]

    pygments.lexer
    ~~~~~~~~~~~~~~

    Base lexer classes.

    :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.

Exported names (__all__): Lexer, RegexLexer, ExtendedRegexLexer, DelegatingLexer,
LexerContext, include, inherit, bygroups, using, this, default, words.

Classes and helpers visible in the bytecode, with their docstring summaries:
- Lexer: lexer for a specific language; all lexers accept the options stripnl (default True),
  stripall (default False), ensurenl (default True), tabsize (default 0),
  encoding (default 'guess', may be set to 'chardet'), and inencoding.
- DelegatingLexer: takes a root lexer and a language lexer; the input is first scanned with
  the language lexer, then all Other tokens are re-lexed with the root lexer (used by the
  template lexer package).
- include, inherit, combined, default, words: helpers used inside token state definitions.
- bygroups: callback that yields one action per capture group in the match.
- using: callback that processes the match with a different lexer.
- RegexLexer / RegexLexerMeta: base for simple stateful regular-expression-based lexers.
- LexerContext / ExtendedRegexLexer: RegexLexer variant that stores its position in a
  context object.
- do_insertions: helper for lexers that must combine the results of several sublexers.
- ProfilingRegexLexer / ProfilingRegexLexerMeta: drop-in RegexLexer replacement that
  collects regex timing information.
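The recovered docstrings describe how the exported helpers are meant to be combined. As an
illustrative sketch only (not part of the dumped file, and assuming the public pygments API
that these docstrings summarize), a minimal RegexLexer subclass using words and bygroups
could look like this; the lexer name and toy grammar are hypothetical:

# Minimal sketch, assuming the public pygments API summarized above.
# The lexer, its name, and the toy grammar are illustrative, not taken from the dump.
from pygments.lexer import RegexLexer, bygroups, words
from pygments.token import Keyword, Name, Operator, Text, Whitespace

class AssignmentLexer(RegexLexer):
    """Toy lexer that highlights 'let NAME = ...' assignments."""
    name = 'Assignment'
    aliases = ['assign']

    tokens = {
        'root': [
            (r'\s+', Whitespace),
            # words() expands literal alternatives into one optimized regex.
            (words(('let', 'const'), suffix=r'\b'), Keyword),
            # bygroups() assigns one token type per capture group.
            (r'([A-Za-z_]\w*)(\s*)(=)', bygroups(Name.Variable, Whitespace, Operator)),
            (r'.', Text),
        ],
    }

# get_tokens() yields (tokentype, value) pairs, as the Lexer docstring describes.
for tok, value in AssignmentLexer().get_tokens('let answer = 42\n'):
    print(tok, repr(value))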