403Webshell
Server IP : 127.0.0.1  /  Your IP : 216.73.216.109
Web Server : Apache/2.4.54 (Win64) OpenSSL/1.1.1q PHP/8.1.10
System : Windows NT DESKTOP-E5T4RUN 10.0 build 19045 (Windows 10) AMD64
User : SERVERWEB ( 0)
PHP Version : 8.1.10
Disabled Functions : NONE
MySQL : OFF |  cURL : ON |  WGET : OFF |  Perl : OFF |  Python : OFF |  Sudo : OFF |  Pkexec : OFF
Directory :  C:/cygwin64/lib/python3.9/site-packages/pip/_internal/index/__pycache__/

Upload File :
current_dir [ Writeable] document_root [ Writeable]

 


Current File : C:/cygwin64/lib/python3.9/site-packages/pip/_internal/index/__pycache__/collector.cpython-39.pyc
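The file being viewed is a compiled CPython `.pyc`, which is why most of it is unreadable: it is a 16-byte header followed by a marshalled code object, with only string constants (such as docstrings) surviving as plain text. As a minimal sketch of that layout — the helper name `load_pyc` is hypothetical, and unmarshalling only works reliably under the same interpreter version that produced the file:

```python
import importlib.util
import marshal

def load_pyc(path):
    """Parse a CPython .pyc: a 16-byte header (PEP 552) followed by a
    marshalled code object. marshal data is version-specific, so this is
    only reliable for .pyc files written by the running interpreter."""
    with open(path, "rb") as f:
        magic = f.read(4)   # interpreter bytecode magic number
        f.read(4)           # bit field (hash-based pyc flags)
        f.read(8)           # source mtime + size, or source hash
        if magic != importlib.util.MAGIC_NUMBER:
            raise ValueError("magic number does not match this interpreter")
        return marshal.loads(f.read())

# Docstrings are stored as constants in the compiled code object, which is
# why they remain readable inside an otherwise binary .pyc dump.
code = compile('"""Example docstring."""\nX = 1\n', "<demo>", "exec")
print(code.co_consts[0])  # -> Example docstring.
```

This explains why fragments like pip's module docstring are legible in the dump below while the surrounding bytecode is not.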
[Binary content: CPython 3.9 compiled bytecode (.pyc), not renderable as text. Recoverable strings identify it as pip's index collector module, with the embedded source path /usr/lib/python3.9/site-packages/pip/_internal/index/collector.py and the module docstring "The main purpose of this module is to expose LinkCollector.collect_sources()." Other legible fragments include docstrings and identifiers such as LinkCollector, parse_links, IndexContent, HTMLLinkParser, CacheablePageContent, _get_simple_response, and _get_index_content.]

Youez - 2016 - github.com/yon3zu
LinuXploit