Compare commits
No commits in common. "master" and "download-server" have entirely different histories.
933 changed files with 261 additions and 157390 deletions
.github/ISSUE_TEMPLATE/1_broken_site.md (vendored): @@ -1,63 +0,0 @@ (file deleted)

---
name: Broken site support
about: Report broken or misfunctioning site
title: ''
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.11.12. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a broken site support
- [ ] I've verified that I'm running youtube-dl version **2020.11.12**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones


## Verbose log

<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2020.11.12
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->

```
PASTE VERBOSE LOG HERE
```


## Description

<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->

WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/2_site_support_request.md (vendored): @@ -1,54 +0,0 @@ (file deleted)

---
name: Site support request
about: Request support for a new site
title: ''
labels: 'site-support-request'
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.11.12. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running youtube-dl version **2020.11.12**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [ ] I've searched the bugtracker for similar site support requests including closed ones


## Example URLs

<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->

- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc


## Description

<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->

WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/3_site_feature_request.md (vendored): @@ -1,37 +0,0 @@ (file deleted)

---
name: Site feature request
about: Request a new functionality for a site
title: ''
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.11.12. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running youtube-dl version **2020.11.12**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones


## Description

<!--
Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->

WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/4_bug_report.md (vendored): @@ -1,65 +0,0 @@ (file deleted)

---
name: Bug report
about: Report a bug unrelated to any particular site or extractor
title: ''
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.11.12. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Read bugs section in FAQ: http://yt-dl.org/reporting
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a broken site support issue
- [ ] I've verified that I'm running youtube-dl version **2020.11.12**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read bugs section in FAQ


## Verbose log

<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2020.11.12
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->

```
PASTE VERBOSE LOG HERE
```


## Description

<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->

WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/5_feature_request.md (vendored): @@ -1,38 +0,0 @@ (file deleted)

---
name: Feature request
about: Request a new functionality unrelated to any particular site or extractor
title: ''
labels: 'request'
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.11.12. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running youtube-dl version **2020.11.12**
- [ ] I've searched the bugtracker for similar feature requests including closed ones


## Description

<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->

WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE/6_question.md (vendored): @@ -1,38 +0,0 @@ (file deleted)

---
name: Ask question
about: Ask youtube-dl related question
title: ''
labels: 'question'
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions
- Search the bugtracker for similar questions: http://yt-dl.org/search-issues
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm asking a question
- [ ] I've looked through the README and FAQ for similar questions
- [ ] I've searched the bugtracker for similar questions including closed ones


## Question

<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.
-->

WRITE QUESTION HERE
.github/ISSUE_TEMPLATE_tmpl/1_broken_site.md (vendored): @@ -1,63 +0,0 @@ (file deleted)

---
name: Broken site support
about: Report broken or misfunctioning site
title: ''
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a broken site support
- [ ] I've verified that I'm running youtube-dl version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones


## Verbose log

<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version %(version)s
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->

```
PASTE VERBOSE LOG HERE
```


## Description

<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->

WRITE DESCRIPTION HERE
@@ -1,54 +0,0 @@ (file deleted; file name not captured in the source)

---
name: Site support request
about: Request support for a new site
title: ''
labels: 'site-support-request'
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running youtube-dl version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [ ] I've searched the bugtracker for similar site support requests including closed ones


## Example URLs

<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->

- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc


## Description

<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->

WRITE DESCRIPTION HERE
@@ -1,37 +0,0 @@ (file deleted; file name not captured in the source)

---
name: Site feature request
about: Request a new functionality for a site
title: ''
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running youtube-dl version **%(version)s**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones


## Description

<!--
Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->

WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE_tmpl/4_bug_report.md (vendored): @@ -1,65 +0,0 @@ (file deleted)

---
name: Bug report
about: Report a bug unrelated to any particular site or extractor
title: ''
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Read bugs section in FAQ: http://yt-dl.org/reporting
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a broken site support issue
- [ ] I've verified that I'm running youtube-dl version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read bugs section in FAQ


## Verbose log

<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version %(version)s
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->

```
PASTE VERBOSE LOG HERE
```


## Description

<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->

WRITE DESCRIPTION HERE
.github/ISSUE_TEMPLATE_tmpl/5_feature_request.md (vendored): @@ -1,38 +0,0 @@ (file deleted)

---
name: Feature request
about: Request a new functionality unrelated to any particular site or extractor
title: ''
labels: 'request'
---

<!--

######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################

-->


## Checklist

<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is %(version)s. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->

- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running youtube-dl version **%(version)s**
- [ ] I've searched the bugtracker for similar feature requests including closed ones


## Description

<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->

WRITE DESCRIPTION HERE
.github/PULL_REQUEST_TEMPLATE.md (vendored): @@ -1,28 +0,0 @@ (file deleted)

## Please follow the guide below

- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like

---

### Before submitting a *pull request* make sure you have:
- [ ] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections
- [ ] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)

### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)

### What is the purpose of your *pull request*?
- [ ] Bug fix
- [ ] Improvement
- [ ] New extractor
- [ ] New feature

---

### Description of your *pull request* and other information

Explanation of your *pull request* in arbitrary form goes here. Please make sure the description explains the purpose and effect of your *pull request* and is worded well enough to be understood. Provide as much context and examples as possible.
.gitignore (vendored): @@ -1,53 +1,13 @@ (modified; reconstructed as a unified diff, "-" = removed, "+" = added, unprefixed = unchanged)

-*.pyc
-*.pyo
-*.class
-*~
-*.DS_Store
-wine-py2exe/
-py2exe.log
 *.kate-swp
+downloads/*
+updates_key.pem
+youtube_dl.egg-info
+test
 build/
 dist/
-MANIFEST
-README.txt
 youtube-dl.1
-youtube-dl.bash-completion
 youtube-dl.fish
-youtube_dl/extractor/lazy_extractors.py
-youtube-dl
-youtube-dl.exe
-youtube-dl.tar.gz
-.coverage
-cover/
-updates_key.pem
-*.egg-info
-*.srt
-*.ttml
-*.sbv
-*.vtt
-*.flv
-*.mp4
-*.m4a
-*.m4v
-*.mp3
-*.3gp
-*.wav
-*.ape
-*.mkv
-*.swf
-*.part
-*.ytdl
-*.swp
-test/local_parameters.json
-.tox
 youtube-dl.zsh
-# IntelliJ related files
-.idea
-*.iml
-
-tmp/
-venv/
-
-# VS Code related files
-.vscode
+README.txt
+youtube-dl.bash-completion
+latest_version
32 .htaccess Normal file
@@ -0,0 +1,32 @@
+Options +Indexes
+
+RewriteEngine On
+RewriteRule ^ip/?$ ip.php
+
+RewriteRule ^bugs?/? https://github.com/ytdl-org/youtube-dl/issues [R=302,L]
+RewriteRule ^readme/?$ https://github.com/ytdl-org/youtube-dl/blob/master/README.md [R=302,L,NE]
+RewriteRule ^reporting/? https://github.com/ytdl-org/youtube-dl/blob/master/README.md#bugs [R=302,L,NE]
+RewriteRule ^update/?$ https://github.com/ytdl-org/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl [R=302,L,NE]
+RewriteRule ^donat(e|ions)/?$ https://ytdl-org.github.io/youtube-dl/donations.html [R=302,L]
+RewriteRule ^faq/?$ https://github.com/ytdl-org/youtube-dl/blob/master/README.md#faq [R=302,L,NE]
+RewriteRule ^(faq-)?anime/?$ https://github.com/ytdl-org/youtube-dl/blob/master/README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free [R=302,L,NE]
+RewriteRule ^copyright-infringement/?$ https://github.com/ytdl-org/youtube-dl/blob/master/README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free [R=302,L,NE]
+RewriteRule ^(faq-)?citw/?$ https://github.com/ytdl-org/youtube-dl/blob/master/README.md#do-i-always-have-to-pass-in---max-quality-format-or--citw [R=302,L,NE]
+RewriteRule ^(?:example-url|questions)/?$ https://github.com/ytdl-org/youtube-dl/blob/master/CONTRIBUTING.md#is-the-description-of-the-issue-itself-sufficient [R=302,L,NE]
+RewriteRule ^g403/?$ https://github.com/ytdl-org/youtube-dl/blob/master/README.md#i-extracted-a-video-url-with--g-but-it-does-not-play-on-another-machine--in-my-webbrowser [R=302,L,NE]
+RewriteRule ^format-selection/?$ https://github.com/ytdl-org/youtube-dl#format-selection [R=302,L,NE]
+RewriteRule ^output-template/?$ https://github.com/ytdl-org/youtube-dl#output-template [R=302,L,NE]
+RewriteRule ^escape/?$ https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command [R=302,L,NE]
+RewriteRule ^search-issues/?$ https://github.com/ytdl-org/youtube-dl/issues?q=is:issue [R=302,L,NE]
+
+RewriteRule ^update/LATEST_VERSION$ https://ytdl-org.github.io/youtube-dl/update/LATEST_VERSION [R=302,L]
+RewriteRule ^update/versions.json https://ytdl-org.github.io/youtube-dl/update/versions.json [R=302,L]
+
+RewriteRule ^latest/version/?$ latest_version [L,T=text/plain]
+RewriteRule ^latest_version/?$ - [T=text/plain]
+RewriteRule ^latest(?:/(.*))?$ /downloads/latest/$1 [R=302,L]
+RewriteRule ^\.git - [F]
+
+ErrorDocument 302 "302"
+
+
51 .travis.yml
@@ -1,50 +1,3 @@
+# Just a placeholder script so that travis doesn't complain
 language: python
-python:
-  - "2.6"
-  - "2.7"
-  - "3.2"
-  - "3.3"
-  - "3.4"
-  - "3.5"
-  - "3.6"
-  - "pypy"
-  - "pypy3"
-dist: trusty
-env:
-  - YTDL_TEST_SET=core
-  - YTDL_TEST_SET=download
-jobs:
-  include:
-    - python: 3.7
-      dist: xenial
-      env: YTDL_TEST_SET=core
-    - python: 3.7
-      dist: xenial
-      env: YTDL_TEST_SET=download
-    - python: 3.8
-      dist: xenial
-      env: YTDL_TEST_SET=core
-    - python: 3.8
-      dist: xenial
-      env: YTDL_TEST_SET=download
-    - python: 3.8-dev
-      dist: xenial
-      env: YTDL_TEST_SET=core
-    - python: 3.8-dev
-      dist: xenial
-      env: YTDL_TEST_SET=download
-    - env: JYTHON=true; YTDL_TEST_SET=core
-    - env: JYTHON=true; YTDL_TEST_SET=download
-    - name: flake8
-      python: 3.8
-      dist: xenial
-      install: pip install flake8
-      script: flake8 .
-  fast_finish: true
-  allow_failures:
-    - env: YTDL_TEST_SET=download
-    - env: JYTHON=true; YTDL_TEST_SET=core
-    - env: JYTHON=true; YTDL_TEST_SET=download
-before_install:
-  - if [ "$JYTHON" == "true" ]; then ./devscripts/install_jython.sh; export PATH="$HOME/jython/bin:$PATH"; fi
-script: ./devscripts/run_tests.sh
+script: true
248 AUTHORS
@@ -1,248 +0,0 @@
Ricardo Garcia Gonzalez
Danny Colligan
Benjamin Johnson
Vasyl' Vavrychuk
Witold Baryluk
Paweł Paprota
Gergely Imreh
Rogério Brito
Philipp Hagemeister
Sören Schulze
Kevin Ngo
Ori Avtalion
shizeeg
Filippo Valsorda
Christian Albrecht
Dave Vasilevsky
Jaime Marquínez Ferrándiz
Jeff Crouse
Osama Khalid
Michael Walter
M. Yasoob Ullah Khalid
Julien Fraichard
Johny Mo Swag
Axel Noack
Albert Kim
Pierre Rudloff
Huarong Huo
Ismael Mejía
Steffan Donal
Andras Elso
Jelle van der Waa
Marcin Cieślak
Anton Larionov
Takuya Tsuchida
Sergey M.
Michael Orlitzky
Chris Gahan
Saimadhav Heblikar
Mike Col
Oleg Prutz
pulpe
Andreas Schmitz
Michael Kaiser
Niklas Laxström
David Triendl
Anthony Weems
David Wagner
Juan C. Olivares
Mattias Harrysson
phaer
Sainyam Kapoor
Nicolas Évrard
Jason Normore
Hoje Lee
Adam Thalhammer
Georg Jähnig
Ralf Haring
Koki Takahashi
Ariset Llerena
Adam Malcontenti-Wilson
Tobias Bell
Naglis Jonaitis
Charles Chen
Hassaan Ali
Dobrosław Żybort
David Fabijan
Sebastian Haas
Alexander Kirk
Erik Johnson
Keith Beckman
Ole Ernst
Aaron McDaniel (mcd1992)
Magnus Kolstad
Hari Padmanaban
Carlos Ramos
5moufl
lenaten
Dennis Scheiba
Damon Timm
winwon
Xavier Beynon
Gabriel Schubiner
xantares
Jan Matějka
Mauroy Sébastien
William Sewell
Dao Hoang Son
Oskar Jauch
Matthew Rayfield
t0mm0
Tithen-Firion
Zack Fernandes
cryptonaut
Adrian Kretz
Mathias Rav
Petr Kutalek
Will Glynn
Max Reimann
Cédric Luthi
Thijs Vermeir
Joel Leclerc
Christopher Krooss
Ondřej Caletka
Dinesh S
Johan K. Jensen
Yen Chi Hsuan
Enam Mijbah Noor
David Luhmer
Shaya Goldberg
Paul Hartmann
Frans de Jonge
Robin de Rooij
Ryan Schmidt
Leslie P. Polzer
Duncan Keall
Alexander Mamay
Devin J. Pohly
Eduardo Ferro Aldama
Jeff Buchbinder
Amish Bhadeshia
Joram Schrijver
Will W.
Mohammad Teimori Pabandi
Roman Le Négrate
Matthias Küch
Julian Richen
Ping O.
Mister Hat
Peter Ding
jackyzy823
George Brighton
Remita Amine
Aurélio A. Heckert
Bernhard Minks
sceext
Zach Bruggeman
Tjark Saul
slangangular
Behrouz Abbasi
ngld
nyuszika7h
Shaun Walbridge
Lee Jenkins
Anssi Hannula
Lukáš Lalinský
Qijiang Fan
Rémy Léone
Marco Ferragina
reiv
Muratcan Simsek
Evan Lu
flatgreen
Brian Foley
Vignesh Venkat
Tom Gijselinck
Founder Fang
Andrew Alexeyew
Saso Bezlaj
Erwin de Haan
Jens Wille
Robin Houtevelts
Patrick Griffis
Aidan Rowe
mutantmonkey
Ben Congdon
Kacper Michajłow
José Joaquín Atria
Viťas Strádal
Kagami Hiiragi
Philip Huppert
blahgeek
Kevin Deldycke
inondle
Tomáš Čech
Déstin Reed
Roman Tsiupa
Artur Krysiak
Jakub Adam Wieczorek
Aleksandar Topuzović
Nehal Patel
Rob van Bekkum
Petr Zvoníček
Pratyush Singh
Aleksander Nitecki
Sebastian Blunt
Matěj Cepl
Xie Yanbo
Philip Xu
John Hawkinson
Rich Leeper
Zhong Jianxin
Thor77
Mattias Wadman
Arjan Verwer
Costy Petrisor
Logan B
Alex Seiler
Vijay Singh
Paul Hartmann
Stephen Chen
Fabian Stahl
Bagira
Odd Stråbø
Philip Herzog
Thomas Christlieb
Marek Rusinowski
Tobias Gruetzmacher
Olivier Bilodeau
Lars Vierbergen
Juanjo Benages
Xiao Di Guan
Thomas Winant
Daniel Twardowski
Jeremie Jarosh
Gerard Rovira
Marvin Ewald
Frédéric Bournival
Timendum
gritstub
Adam Voss
Mike Fährmann
Jan Kundrát
Giuseppe Fabiano
Örn Guðjónsson
Parmjit Virk
Genki Sky
Ľuboš Katrinec
Corey Nicholson
Ashutosh Chaudhary
John Dong
Tatsuyuki Ishi
Daniel Weber
Kay Bouché
Yang Hongbo
Lei Wang
Petr Novák
Leonardo Taccari
Martin Weinelt
Surya Oktafendri
TingPing
Alexandre Macabies
Bastian de Groot
Niklas Haas
András Veres-Szentkirályi
Enes Solak
Nathan Rossi
Thomas van der Berg
Luca Cherubin
434 CONTRIBUTING.md
@@ -1,434 +0,0 @@

**Please include the full output of youtube-dl when run with `-v`**, i.e. **add** the `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:

```
$ youtube-dl -v <your command line>
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2015.12.06
[debug] Git HEAD: 135392e
[debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
...
```

**Do not post screenshots of verbose logs; only plain text is acceptable.**

The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.

Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):

### Is the description of the issue itself sufficient?

We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.

So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious

- What the problem is
- How it could be fixed
- What your proposed solution would look like

If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.

For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the `-v` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.

If your server has multiple IPs or you suspect censorship, adding `--call-home` may be a good idea to get more diagnostics. If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--dump-pages` (warning: this will yield a rather large output, redirect it to the file `log.txt` by adding `>log.txt 2>&1` to your command-line) or upload the `.dump` files you get when you add `--write-pages` [somewhere](https://gist.github.com/).

**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.

### Are you using the latest version?

Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive are already fixed, but people are using outdated versions. This goes for feature requests as well.

### Is the issue already documented?

Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/ytdl-org/youtube-dl/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.

### Why are existing options not enough?

Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/ytdl-org/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.

### Is there enough context in your bug report?

People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) into a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: one simple, and one impossible (or extremely complicated).

We are then presented with a very complicated request when the original problem could be solved far more easily, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.

### Does the issue involve one problem, and one problem only?

Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.

In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White house podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.

### Is anyone going to need the feature?

Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.

### Is your question about youtube-dl?

It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different, or even the reporter's own, application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.

# DEVELOPER INSTRUCTIONS

Most users do not need to build youtube-dl and can [download the builds](https://ytdl-org.github.io/youtube-dl/download.html) or get them from their distribution.

To run youtube-dl as a developer, you don't need to build anything either. Simply execute

    python -m youtube_dl

To run the tests, simply invoke your favorite test runner, or execute a test file directly; any of the following work:

    python -m unittest discover
    python test/test_download.py
    nosetests

See item 6 of the [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor-specific test cases.

If you want to create a build of youtube-dl yourself, you'll need

* python
* make (only GNU make is supported)
* pandoc
* zip
* nosetests

### Adding support for a new site

If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. youtube-dl does **not support** such sites; thus pull requests adding support for them **will be rejected**.

After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called `yourextractor`):

1. [Fork this repository](https://github.com/ytdl-org/youtube-dl/fork)
2. Check out the source code with:

        git clone git@github.com:YOUR_GITHUB_USERNAME/youtube-dl.git

3. Start a new git branch with

        cd youtube-dl
        git checkout -b yourextractor

4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:

    ```python
    # coding: utf-8
    from __future__ import unicode_literals

    from .common import InfoExtractor


    class YourExtractorIE(InfoExtractor):
        _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
        _TEST = {
            'url': 'https://yourextractor.com/watch/42',
            'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
            'info_dict': {
                'id': '42',
                'ext': 'mp4',
                'title': 'Video title goes here',
                'thumbnail': r're:^https?://.*\.jpg$',
                # TODO more properties, either as:
                # * A value
                # * MD5 checksum; start the string with md5:
                # * A regular expression; start the string with re:
                # * Any Python type (for example int or float)
            }
        }

        def _real_extract(self, url):
            video_id = self._match_id(url)
            webpage = self._download_webpage(url, video_id)

            # TODO more code goes here, for example ...
            title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')

            return {
                'id': video_id,
                'title': title,
                'description': self._og_search_description(webpage),
                'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
                # TODO more properties (see youtube_dl/extractor/common.py)
            }
    ```
5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test dict are not counted.
7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303). Add tests and code for as many as you want.
8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):

        $ flake8 youtube_dl/extractor/yourextractor.py

9. Make sure your code works under all [Python](https://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.
10. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files and [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:

        $ git add youtube_dl/extractor/extractors.py
        $ git add youtube_dl/extractor/yourextractor.py
        $ git commit -m '[yourextractor] Add new extractor'
        $ git push origin yourextractor

11. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.

In any case, thank you very much for your contributions!

## youtube-dl coding conventions

This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.

Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters, which is out of your control and tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly, but also to minimize dependency on the source's layout and even to make the code anticipate potential future changes. This is important because it allows the extractor to survive minor layout changes, keeping old youtube-dl versions working. Even though such breakage is easily fixed by releasing a new version of youtube-dl with a fix incorporated, all the previous versions remain broken in all repositories and distro packages that may not be so prompt in fetching the update from us. Needless to say, some non-rolling-release distros may never receive an update at all.

### Mandatory and optional metafields

For extraction to work youtube-dl relies on the metadata your extractor extracts and provides to youtube-dl, expressed as an [information dictionary](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by youtube-dl:

 - `id` (media identifier)
 - `title` (media title)
 - `url` (media download URL) or `formats`

In fact only the last option is technically mandatory (i.e. if you can't figure out the download location of the media, the extraction does not make any sense). But by convention youtube-dl also treats `id` and `title` as mandatory. Thus the aforementioned metafields are the critical data without which extraction does not make any sense; if any of them fail to be extracted, the extractor is considered completely broken.
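To make the convention concrete, here is a minimal sketch (plain Python, with purely hypothetical values) of the smallest info dict an extractor could return and still be considered working:

```python
# A minimal, hypothetical info dict: only these fields are required
# for youtube-dl to proceed with downloading.
info_dict = {
    'id': '42',                                 # media identifier
    'title': 'Some video title',                # media title
    'url': 'https://example.com/media/42.mp4',  # direct media URL (or provide 'formats')
}

# Every other metafield is optional and must be extracted tolerantly.
mandatory = ('id', 'title')
assert all(field in info_dict for field in mandatory)
assert 'url' in info_dict or 'formats' in info_dict
```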
[Any field](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L188-L303) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** of situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general-purpose mandatory fields.

#### Example

Say you have some source dictionary `meta` that you've fetched as JSON over HTTP and it has a key `summary`:

```python
meta = self._download_json(url, video_id)
```

Assume at this point `meta`'s layout is:

```python
{
    ...
    "summary": "some fancy summary text",
    ...
}
```

Assume you want to extract `summary` and put it into the resulting info dict as `description`. Since `description` is an optional meta field you should be prepared for this key to be missing from the `meta` dict, so you should extract it like:

```python
description = meta.get('summary')  # correct
```

and not like:

```python
description = meta['summary']  # incorrect
```

The latter will break the extraction process with `KeyError` if `summary` disappears from `meta` at some later time, while with the former approach extraction will just go ahead with `description` set to `None`, which is perfectly fine (remember, `None` is equivalent to the absence of data).
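The difference between the two approaches is easy to demonstrate on a toy dictionary (hypothetical data, not from any real extractor):

```python
meta = {'id': '42'}  # no 'summary' key, as if the hoster removed it

# dict.get returns None for a missing key, so extraction continues
description = meta.get('summary')
assert description is None

# subscripting raises KeyError and would abort the whole extraction
try:
    description = meta['summary']
except KeyError:
    description = 'extraction would have crashed here'
assert description == 'extraction would have crashed here'
```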
Similarly, you should pass `fatal=False` when extracting optional data from a webpage with `_search_regex`, `_html_search_regex` or similar methods, for instance:

```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', fatal=False)
```

With `fatal` set to `False`, if `_search_regex` fails to extract `description` it will emit a warning and continue extraction.

You can also pass `default=<some fallback value>`, for example:

```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', default=None)
```

On failure this code will silently continue the extraction with `description` set to `None`. That is useful for metafields that may or may not be present.

### Provide fallbacks

When extracting metadata try to do so from multiple sources. For example if `title` is present in several places, try extracting it from at least some of them. This makes it more future-proof in case some of the sources become unavailable.

#### Example

Say `meta` from the previous example has a `title` and you are about to extract it. Since `title` is a mandatory meta field you should end up with something like:

```python
title = meta['title']
```

If `title` disappears from `meta` in the future due to some changes on the hoster's side, the extraction would fail since `title` is mandatory. That's expected.

Assume that you have another source you can extract `title` from, for example the `og:title` HTML meta tag of a `webpage`. In this case you can provide a fallback scenario:

```python
title = meta.get('title') or self._og_search_title(webpage)
```

This code will try to extract from `meta` first, and if that fails it will fall back to extracting `og:title` from the `webpage`.
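A short sketch shows how the `or` chain behaves when the primary source goes away (here `og_search_title_stub` is a hypothetical stand-in for `self._og_search_title`):

```python
def og_search_title_stub(webpage):
    # hypothetical stand-in for self._og_search_title(webpage)
    return 'Title from og:title'

meta = {}  # 'title' has disappeared from the JSON metadata
webpage = '<meta property="og:title" content="Title from og:title">'

# meta.get('title') is None (falsy), so the right-hand side runs
title = meta.get('title') or og_search_title_stub(webpage)
assert title == 'Title from og:title'
```

Note that `or` also skips other falsy values such as an empty string, so an empty `title` in `meta` would fall through to the fallback as well.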
|
|
||||||
|
|
||||||
### Regular expressions
|
|
||||||
|
|
||||||
#### Don't capture groups you don't use
|
|
||||||
|
|
||||||
Capturing group must be an indication that it's used somewhere in the code. Any group that is not used must be non capturing.
|
|
||||||
|
|
||||||
##### Example
|
|
||||||
|
|
||||||
Don't capture id attribute name here since you can't use it for anything anyway.
|
|
||||||
|
|
||||||
Correct:
|
|
||||||
|
|
||||||
```python
|
|
||||||
r'(?:id|ID)=(?P<id>\d+)'
|
|
||||||
```
|
|
||||||
|
|
||||||
Incorrect:
|
|
||||||
```python
|
|
||||||
r'(id|ID)=(?P<id>\d+)'
|
|
||||||
```
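The difference is easy to inspect with plain `re`: both patterns extract the same `id`, but the incorrect one drags along an unused capturing group:

```python
import re

correct = re.compile(r'(?:id|ID)=(?P<id>\d+)')
incorrect = re.compile(r'(id|ID)=(?P<id>\d+)')

# Both extract the value the code actually uses:
print(correct.search('video?ID=1337').group('id'))    # 1337
print(incorrect.search('video?ID=1337').group('id'))  # 1337

# But the incorrect pattern carries an extra capturing group nobody reads:
print(correct.groups)    # 1
print(incorrect.groups)  # 2
```

The `re.Pattern.groups` attribute reports the number of capturing groups; a named group such as `(?P<id>...)` counts, a `(?:...)` group does not.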

#### Make regular expressions relaxed and flexible

When using regular expressions try to write them fuzzy, relaxed and flexible, skipping insignificant parts that are more likely to change, allowing both single and double quotes for quoted values and so on.

##### Example

Say you need to extract `title` from the following HTML code:

```html
<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>
```

The code for that task should look similar to:

```python
title = self._search_regex(
    r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
```

Or even better:

```python
title = self._search_regex(
    r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
    webpage, 'title', group='title')
```

Note how you tolerate potential changes in the `style` attribute's value or a switch from double quotes to single quotes for the `class` attribute.

The code definitely should not look like:

```python
title = self._search_regex(
    r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
    webpage, 'title', group='title')
```
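You can verify with plain `re` that the flexible pattern tolerates both quoting styles; the two markup snippets below are made-up variants for illustration:

```python
import re

# The relaxed pattern from above: \1 backreferences whichever quote opened the value
pattern = r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)'

double = '<span style="left: 910px;" class="title">some fancy title</span>'
single = "<span class='title' data-x='1'>some fancy title</span>"

for webpage in (double, single):
    m = re.search(pattern, webpage)
    print(m.group('title'))  # some fancy title — both quote styles match
```

The rigid pattern with the full hard-coded `style` attribute would match only markup that is byte-for-byte identical to today's page.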

### Long lines policy

There is a soft limit to keep lines of code under 80 characters long. This means it should be respected if possible and if it does not make readability and code maintenance worse.

For example, you should **never** split long string literals like URLs or some other often copied entities over multiple lines to fit this limit:

Correct:

```python
'https://www.youtube.com/watch?v=FqZTN594JQw&list=PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'
```

Incorrect:

```python
'https://www.youtube.com/watch?v=FqZTN594JQw&list='
'PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'
```

### Inline values

Extracting variables is acceptable for reducing code duplication and improving readability of complex expressions. However, you should avoid extracting variables used only once and moving them to opposite parts of the extractor file, which makes reading the linear flow difficult.

#### Example

Correct:

```python
title = self._html_search_regex(r'<title>([^<]+)</title>', webpage, 'title')
```

Incorrect:

```python
TITLE_RE = r'<title>([^<]+)</title>'
# ...some lines of code...
title = self._html_search_regex(TITLE_RE, webpage, 'title')
```

### Collapse fallbacks

Multiple fallback values can quickly become unwieldy. Collapse multiple fallback values into a single expression via a list of patterns.

#### Example

Good:

```python
description = self._html_search_meta(
    ['og:description', 'description', 'twitter:description'],
    webpage, 'description', default=None)
```

Unwieldy:

```python
description = (
    self._og_search_description(webpage, default=None)
    or self._html_search_meta('description', webpage, default=None)
    or self._html_search_meta('twitter:description', webpage, default=None))
```
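Under the hood, list-of-patterns support just tries each name in order and returns the first match. A minimal sketch of that idea — not the real `_html_search_meta` implementation, which handles far more markup variations:

```python
import re


def html_search_meta(names, webpage, default=None):
    """Sketch: try each meta name in order, return the first match."""
    if not isinstance(names, (list, tuple)):
        names = [names]
    for name in names:
        m = re.search(
            r'<meta[^>]+(?:name|property)=["\']%s["\'][^>]+content=["\']([^"\']+)'
            % re.escape(name), webpage)
        if m:
            return m.group(1)
    return default


webpage = '<meta name="description" content="plain description">'
# og:description is absent, so the second name in the list wins:
print(html_search_meta(['og:description', 'description'], webpage))  # plain description
# no match at all falls back to the default:
print(html_search_meta('twitter:description', webpage))  # None
```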

Methods supporting list of patterns are: `_search_regex`, `_html_search_regex`, `_og_search_property`, `_html_search_meta`.

### Trailing parentheses

Always move trailing parentheses after the last argument.

#### Example

Correct:

```python
    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list)
```

Incorrect:

```python
    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list,
)
```

### Use convenience conversion and parsing functions

Wrap all extracted numeric data into safe functions from [`youtube_dl/utils.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py): `int_or_none`, `float_or_none`. Use them for string to number conversions as well.

Use `url_or_none` for safe URL processing.

Use `try_get` for safe metadata extraction from parsed JSON.

Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta fields extraction, `parse_resolution` for resolution extraction, `parse_duration` for `duration` extraction, `parse_age_limit` for `age_limit` extraction.

Explore [`youtube_dl/utils.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py) for more useful convenience functions.

#### More examples

##### Safely extract optional description from parsed JSON

```python
description = try_get(response, lambda x: x['result']['video'][0]['summary'], compat_str)
```

##### Safely extract more optional metadata

```python
video = try_get(response, lambda x: x['result']['video'][0], dict) or {}
description = video.get('summary')
duration = float_or_none(video.get('durationMs'), scale=1000)
view_count = int_or_none(video.get('views'))
```
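For illustration, the behaviour these examples rely on can be emulated with simplified stand-ins — the real helpers in `youtube_dl/utils.py` accept more parameters (e.g. `invscale`, `get_attr`):

```python
def try_get(src, getter, expected_type=None):
    """Sketch: return getter(src) unless it raises or has the wrong type."""
    try:
        v = getter(src)
    except (AttributeError, KeyError, TypeError, IndexError):
        return None
    if expected_type is None or isinstance(v, expected_type):
        return v
    return None


def int_or_none(v, default=None):
    """Sketch: int(v), or the default when v is missing or malformed."""
    try:
        return int(v)
    except (TypeError, ValueError):
        return default


def float_or_none(v, scale=1, default=None):
    """Sketch: float(v) / scale, or the default on failure."""
    try:
        return float(v) / scale
    except (TypeError, ValueError):
        return default


# Hypothetical parsed JSON shaped like the examples above:
response = {'result': {'video': [
    {'summary': 'demo', 'durationMs': '90500', 'views': '1337'}]}}

video = try_get(response, lambda x: x['result']['video'][0], dict) or {}
print(video.get('summary'))                                # demo
print(float_or_none(video.get('durationMs'), scale=1000))  # 90.5
print(int_or_none(video.get('views')))                     # 1337
print(int_or_none(video.get('missing')))                   # None
```

The point of the pattern: every lookup degrades to `None` instead of raising `KeyError`/`TypeError` when an optional field is absent.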

LICENSE

@@ -1,24 +0,0 @@
This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.

In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <http://unlicense.org/>

MANIFEST.in

@@ -1,9 +0,0 @@
include README.md
include LICENSE
include AUTHORS
include ChangeLog
include youtube-dl.bash-completion
include youtube-dl.fish
include youtube-dl.1
recursive-include docs Makefile conf.py *.rst
recursive-include test *

Makefile

@@ -1,135 +0,0 @@
all: youtube-dl README.md CONTRIBUTING.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish supportedsites

clean:
    rm -rf youtube-dl.1.temp.md youtube-dl.1 youtube-dl.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/ youtube-dl.tar.gz youtube-dl.zsh youtube-dl.fish youtube_dl/extractor/lazy_extractors.py *.dump *.part* *.ytdl *.info.json *.mp4 *.m4a *.flv *.mp3 *.avi *.mkv *.webm *.3gp *.wav *.ape *.swf *.jpg *.png CONTRIBUTING.md.tmp youtube-dl youtube-dl.exe
    find . -name "*.pyc" -delete
    find . -name "*.class" -delete

PREFIX ?= /usr/local
BINDIR ?= $(PREFIX)/bin
MANDIR ?= $(PREFIX)/man
SHAREDIR ?= $(PREFIX)/share
PYTHON ?= /usr/bin/env python

# set SYSCONFDIR to /etc if PREFIX=/usr or PREFIX=/usr/local
SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then echo /etc; else echo $(PREFIX)/etc; fi)

# set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2
MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi)

install: youtube-dl youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish
    install -d $(DESTDIR)$(BINDIR)
    install -m 755 youtube-dl $(DESTDIR)$(BINDIR)
    install -d $(DESTDIR)$(MANDIR)/man1
    install -m 644 youtube-dl.1 $(DESTDIR)$(MANDIR)/man1
    install -d $(DESTDIR)$(SYSCONFDIR)/bash_completion.d
    install -m 644 youtube-dl.bash-completion $(DESTDIR)$(SYSCONFDIR)/bash_completion.d/youtube-dl
    install -d $(DESTDIR)$(SHAREDIR)/zsh/site-functions
    install -m 644 youtube-dl.zsh $(DESTDIR)$(SHAREDIR)/zsh/site-functions/_youtube-dl
    install -d $(DESTDIR)$(SYSCONFDIR)/fish/completions
    install -m 644 youtube-dl.fish $(DESTDIR)$(SYSCONFDIR)/fish/completions/youtube-dl.fish

codetest:
    flake8 .

test:
    #nosetests --with-coverage --cover-package=youtube_dl --cover-html --verbose --processes 4 test
    nosetests --verbose test
    $(MAKE) codetest

ot: offlinetest

# Keep this list in sync with devscripts/run_tests.sh
offlinetest: codetest
    $(PYTHON) -m nose --verbose test \
        --exclude test_age_restriction.py \
        --exclude test_download.py \
        --exclude test_iqiyi_sdk_interpreter.py \
        --exclude test_socks.py \
        --exclude test_subtitles.py \
        --exclude test_write_annotations.py \
        --exclude test_youtube_lists.py \
        --exclude test_youtube_signature.py

tar: youtube-dl.tar.gz

.PHONY: all clean install test tar bash-completion pypi-files zsh-completion fish-completion ot offlinetest codetest supportedsites

pypi-files: youtube-dl.bash-completion README.txt youtube-dl.1 youtube-dl.fish

youtube-dl: youtube_dl/*.py youtube_dl/*/*.py
    mkdir -p zip
    for d in youtube_dl youtube_dl/downloader youtube_dl/extractor youtube_dl/postprocessor ; do \
      mkdir -p zip/$$d ;\
      cp -pPR $$d/*.py zip/$$d/ ;\
    done
    touch -t 200001010101 zip/youtube_dl/*.py zip/youtube_dl/*/*.py
    mv zip/youtube_dl/__main__.py zip/
    cd zip ; zip -q ../youtube-dl youtube_dl/*.py youtube_dl/*/*.py __main__.py
    rm -rf zip
    echo '#!$(PYTHON)' > youtube-dl
    cat youtube-dl.zip >> youtube-dl
    rm youtube-dl.zip
    chmod a+x youtube-dl

README.md: youtube_dl/*.py youtube_dl/*/*.py
    COLUMNS=80 $(PYTHON) youtube_dl/__main__.py --help | $(PYTHON) devscripts/make_readme.py

CONTRIBUTING.md: README.md
    $(PYTHON) devscripts/make_contributing.py README.md CONTRIBUTING.md

issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md youtube_dl/version.py
    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE/1_broken_site.md
    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE/2_site_support_request.md
    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md
    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE/4_bug_report.md
    $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md .github/ISSUE_TEMPLATE/5_feature_request.md

supportedsites:
    $(PYTHON) devscripts/make_supportedsites.py docs/supportedsites.md

README.txt: README.md
    pandoc -f $(MARKDOWN) -t plain README.md -o README.txt

youtube-dl.1: README.md
    $(PYTHON) devscripts/prepare_manpage.py youtube-dl.1.temp.md
    pandoc -s -f $(MARKDOWN) -t man youtube-dl.1.temp.md -o youtube-dl.1
    rm -f youtube-dl.1.temp.md

youtube-dl.bash-completion: youtube_dl/*.py youtube_dl/*/*.py devscripts/bash-completion.in
    $(PYTHON) devscripts/bash-completion.py

bash-completion: youtube-dl.bash-completion

youtube-dl.zsh: youtube_dl/*.py youtube_dl/*/*.py devscripts/zsh-completion.in
    $(PYTHON) devscripts/zsh-completion.py

zsh-completion: youtube-dl.zsh

youtube-dl.fish: youtube_dl/*.py youtube_dl/*/*.py devscripts/fish-completion.in
    $(PYTHON) devscripts/fish-completion.py

fish-completion: youtube-dl.fish

lazy-extractors: youtube_dl/extractor/lazy_extractors.py

_EXTRACTOR_FILES = $(shell find youtube_dl/extractor -iname '*.py' -and -not -iname 'lazy_extractors.py')
youtube_dl/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES)
    $(PYTHON) devscripts/make_lazy_extractors.py $@

youtube-dl.tar.gz: youtube-dl README.md README.txt youtube-dl.1 youtube-dl.bash-completion youtube-dl.zsh youtube-dl.fish ChangeLog AUTHORS
    @tar -czf youtube-dl.tar.gz --transform "s|^|youtube-dl/|" --owner 0 --group 0 \
        --exclude '*.DS_Store' \
        --exclude '*.kate-swp' \
        --exclude '*.pyc' \
        --exclude '*.pyo' \
        --exclude '*~' \
        --exclude '__pycache__' \
        --exclude '.git' \
        --exclude 'docs/_build' \
        -- \
        bin devscripts test youtube_dl docs \
        ChangeLog AUTHORS LICENSE README.md README.txt \
        Makefile MANIFEST.in youtube-dl.1 youtube-dl.bash-completion \
        youtube-dl.zsh youtube-dl.fish setup.py setup.cfg \
        youtube-dl

youtube_dl/__main__.py

@@ -1,6 +0,0 @@
#!/usr/bin/env python

import youtube_dl

if __name__ == '__main__':
    youtube_dl.main()

devscripts/bash-completion.in

@@ -1,29 +0,0 @@
__youtube_dl()
{
    local cur prev opts fileopts diropts keywords
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"
    opts="{{flags}}"
    keywords=":ytfavorites :ytrecommended :ytsubscriptions :ytwatchlater :ythistory"
    fileopts="-a|--batch-file|--download-archive|--cookies|--load-info"
    diropts="--cache-dir"

    if [[ ${prev} =~ ${fileopts} ]]; then
        COMPREPLY=( $(compgen -f -- ${cur}) )
        return 0
    elif [[ ${prev} =~ ${diropts} ]]; then
        COMPREPLY=( $(compgen -d -- ${cur}) )
        return 0
    fi

    if [[ ${cur} =~ : ]]; then
        COMPREPLY=( $(compgen -W "${keywords}" -- ${cur}) )
        return 0
    elif [[ ${cur} == * ]] ; then
        COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
        return 0
    fi
}

complete -F __youtube_dl youtube-dl

devscripts/bash-completion.py

@@ -1,30 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import os
from os.path import dirname as dirn
import sys

sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl

BASH_COMPLETION_FILE = "youtube-dl.bash-completion"
BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"


def build_completion(opt_parser):
    opts_flag = []
    for group in opt_parser.option_groups:
        for option in group.option_list:
            # for every long flag
            opts_flag.append(option.get_opt_string())
    with open(BASH_COMPLETION_TEMPLATE) as f:
        template = f.read()
    with open(BASH_COMPLETION_FILE, "w") as f:
        # just using the special char
        filled_template = template.replace("{{flags}}", " ".join(opts_flag))
        f.write(filled_template)


parser = youtube_dl.parseOpts()[0]
build_completion(parser)

devscripts/buildserver.py

@@ -1,433 +0,0 @@
#!/usr/bin/python3

import argparse
import ctypes
import functools
import shutil
import subprocess
import sys
import tempfile
import threading
import traceback
import os.path

sys.path.insert(0, os.path.dirname(os.path.dirname((os.path.abspath(__file__)))))
from youtube_dl.compat import (
    compat_input,
    compat_http_server,
    compat_str,
    compat_urlparse,
)

# These are not used outside of buildserver.py thus not in compat.py

try:
    import winreg as compat_winreg
except ImportError:  # Python 2
    import _winreg as compat_winreg

try:
    import socketserver as compat_socketserver
except ImportError:  # Python 2
    import SocketServer as compat_socketserver


class BuildHTTPServer(compat_socketserver.ThreadingMixIn, compat_http_server.HTTPServer):
    allow_reuse_address = True


advapi32 = ctypes.windll.advapi32

SC_MANAGER_ALL_ACCESS = 0xf003f
SC_MANAGER_CREATE_SERVICE = 0x02
SERVICE_WIN32_OWN_PROCESS = 0x10
SERVICE_AUTO_START = 0x2
SERVICE_ERROR_NORMAL = 0x1
DELETE = 0x00010000
SERVICE_STATUS_START_PENDING = 0x00000002
SERVICE_STATUS_RUNNING = 0x00000004
SERVICE_ACCEPT_STOP = 0x1

SVCNAME = 'youtubedl_builder'

LPTSTR = ctypes.c_wchar_p
START_CALLBACK = ctypes.WINFUNCTYPE(None, ctypes.c_int, ctypes.POINTER(LPTSTR))


class SERVICE_TABLE_ENTRY(ctypes.Structure):
    _fields_ = [
        ('lpServiceName', LPTSTR),
        ('lpServiceProc', START_CALLBACK)
    ]


HandlerEx = ctypes.WINFUNCTYPE(
    ctypes.c_int,     # return
    ctypes.c_int,     # dwControl
    ctypes.c_int,     # dwEventType
    ctypes.c_void_p,  # lpEventData,
    ctypes.c_void_p,  # lpContext,
)


def _ctypes_array(c_type, py_array):
    ar = (c_type * len(py_array))()
    ar[:] = py_array
    return ar


def win_OpenSCManager():
    res = advapi32.OpenSCManagerW(None, None, SC_MANAGER_ALL_ACCESS)
    if not res:
        raise Exception('Opening service manager failed - '
                        'are you running this as administrator?')
    return res


def win_install_service(service_name, cmdline):
    manager = win_OpenSCManager()
    try:
        h = advapi32.CreateServiceW(
            manager, service_name, None,
            SC_MANAGER_CREATE_SERVICE, SERVICE_WIN32_OWN_PROCESS,
            SERVICE_AUTO_START, SERVICE_ERROR_NORMAL,
            cmdline, None, None, None, None, None)
        if not h:
            raise OSError('Service creation failed: %s' % ctypes.FormatError())

        advapi32.CloseServiceHandle(h)
    finally:
        advapi32.CloseServiceHandle(manager)


def win_uninstall_service(service_name):
    manager = win_OpenSCManager()
    try:
        h = advapi32.OpenServiceW(manager, service_name, DELETE)
        if not h:
            raise OSError('Could not find service %s: %s' % (
                service_name, ctypes.FormatError()))

        try:
            if not advapi32.DeleteService(h):
                raise OSError('Deletion failed: %s' % ctypes.FormatError())
        finally:
            advapi32.CloseServiceHandle(h)
    finally:
        advapi32.CloseServiceHandle(manager)


def win_service_report_event(service_name, msg, is_error=True):
    with open('C:/sshkeys/log', 'a', encoding='utf-8') as f:
        f.write(msg + '\n')

    event_log = advapi32.RegisterEventSourceW(None, service_name)
    if not event_log:
        raise OSError('Could not report event: %s' % ctypes.FormatError())

    try:
        type_id = 0x0001 if is_error else 0x0004
        event_id = 0xc0000000 if is_error else 0x40000000
        lines = _ctypes_array(LPTSTR, [msg])

        if not advapi32.ReportEventW(
                event_log, type_id, 0, event_id, None, len(lines), 0,
                lines, None):
            raise OSError('Event reporting failed: %s' % ctypes.FormatError())
    finally:
        advapi32.DeregisterEventSource(event_log)


def win_service_handler(stop_event, *args):
    try:
        raise ValueError('Handler called with args ' + repr(args))
        TODO
    except Exception as e:
        tb = traceback.format_exc()
        msg = str(e) + '\n' + tb
        win_service_report_event(service_name, msg, is_error=True)
        raise


def win_service_set_status(handle, status_code):
    svcStatus = SERVICE_STATUS()
    svcStatus.dwServiceType = SERVICE_WIN32_OWN_PROCESS
    svcStatus.dwCurrentState = status_code
    svcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP

    svcStatus.dwServiceSpecificExitCode = 0

    if not advapi32.SetServiceStatus(handle, ctypes.byref(svcStatus)):
        raise OSError('SetServiceStatus failed: %r' % ctypes.FormatError())


def win_service_main(service_name, real_main, argc, argv_raw):
    try:
        # args = [argv_raw[i].value for i in range(argc)]
        stop_event = threading.Event()
        handler = HandlerEx(functools.partial(stop_event, win_service_handler))
        h = advapi32.RegisterServiceCtrlHandlerExW(service_name, handler, None)
        if not h:
            raise OSError('Handler registration failed: %s' %
                          ctypes.FormatError())

        TODO
    except Exception as e:
        tb = traceback.format_exc()
        msg = str(e) + '\n' + tb
        win_service_report_event(service_name, msg, is_error=True)
        raise


def win_service_start(service_name, real_main):
    try:
        cb = START_CALLBACK(
            functools.partial(win_service_main, service_name, real_main))
        dispatch_table = _ctypes_array(SERVICE_TABLE_ENTRY, [
            SERVICE_TABLE_ENTRY(
                service_name,
                cb
            ),
            SERVICE_TABLE_ENTRY(None, ctypes.cast(None, START_CALLBACK))
        ])

        if not advapi32.StartServiceCtrlDispatcherW(dispatch_table):
            raise OSError('ctypes start failed: %s' % ctypes.FormatError())
    except Exception as e:
        tb = traceback.format_exc()
        msg = str(e) + '\n' + tb
        win_service_report_event(service_name, msg, is_error=True)
        raise


def main(args=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--install',
                        action='store_const', dest='action', const='install',
                        help='Launch at Windows startup')
    parser.add_argument('-u', '--uninstall',
                        action='store_const', dest='action', const='uninstall',
                        help='Remove Windows service')
    parser.add_argument('-s', '--service',
                        action='store_const', dest='action', const='service',
                        help='Run as a Windows service')
    parser.add_argument('-b', '--bind', metavar='<host:port>',
                        action='store', default='0.0.0.0:8142',
                        help='Bind to host:port (default %default)')
    options = parser.parse_args(args=args)

    if options.action == 'install':
        fn = os.path.abspath(__file__).replace('v:', '\\\\vboxsrv\\vbox')
        cmdline = '%s %s -s -b %s' % (sys.executable, fn, options.bind)
        win_install_service(SVCNAME, cmdline)
        return

    if options.action == 'uninstall':
        win_uninstall_service(SVCNAME)
        return

    if options.action == 'service':
        win_service_start(SVCNAME, main)
        return

    host, port_str = options.bind.split(':')
    port = int(port_str)

    print('Listening on %s:%d' % (host, port))
    srv = BuildHTTPServer((host, port), BuildHTTPRequestHandler)
    thr = threading.Thread(target=srv.serve_forever)
    thr.start()
    compat_input('Press ENTER to shut down')
    srv.shutdown()
    thr.join()


def rmtree(path):
    for name in os.listdir(path):
        fname = os.path.join(path, name)
        if os.path.isdir(fname):
            rmtree(fname)
        else:
            os.chmod(fname, 0o666)
            os.remove(fname)
    os.rmdir(path)


class BuildError(Exception):
    def __init__(self, output, code=500):
        self.output = output
        self.code = code

    def __str__(self):
        return self.output


class HTTPError(BuildError):
    pass


class PythonBuilder(object):
    def __init__(self, **kwargs):
        python_version = kwargs.pop('python', '3.4')
        python_path = None
        for node in ('Wow6432Node\\', ''):
            try:
                key = compat_winreg.OpenKey(
                    compat_winreg.HKEY_LOCAL_MACHINE,
                    r'SOFTWARE\%sPython\PythonCore\%s\InstallPath' % (node, python_version))
                try:
                    python_path, _ = compat_winreg.QueryValueEx(key, '')
                finally:
                    compat_winreg.CloseKey(key)
                break
            except Exception:
                pass

        if not python_path:
            raise BuildError('No such Python version: %s' % python_version)

        self.pythonPath = python_path

        super(PythonBuilder, self).__init__(**kwargs)


class GITInfoBuilder(object):
    def __init__(self, **kwargs):
        try:
            self.user, self.repoName = kwargs['path'][:2]
            self.rev = kwargs.pop('rev')
        except ValueError:
            raise BuildError('Invalid path')
        except KeyError as e:
            raise BuildError('Missing mandatory parameter "%s"' % e.args[0])

        path = os.path.join(os.environ['APPDATA'], 'Build archive', self.repoName, self.user)
        if not os.path.exists(path):
            os.makedirs(path)
        self.basePath = tempfile.mkdtemp(dir=path)
        self.buildPath = os.path.join(self.basePath, 'build')

        super(GITInfoBuilder, self).__init__(**kwargs)


class GITBuilder(GITInfoBuilder):
    def build(self):
        try:
            subprocess.check_output(['git', 'clone', 'git://github.com/%s/%s.git' % (self.user, self.repoName), self.buildPath])
            subprocess.check_output(['git', 'checkout', self.rev], cwd=self.buildPath)
        except subprocess.CalledProcessError as e:
            raise BuildError(e.output)

        super(GITBuilder, self).build()


class YoutubeDLBuilder(object):
    authorizedUsers = ['fraca7', 'phihag', 'rg3', 'FiloSottile', 'ytdl-org']

    def __init__(self, **kwargs):
        if self.repoName != 'youtube-dl':
            raise BuildError('Invalid repository "%s"' % self.repoName)
        if self.user not in self.authorizedUsers:
            raise HTTPError('Unauthorized user "%s"' % self.user, 401)

        super(YoutubeDLBuilder, self).__init__(**kwargs)

    def build(self):
        try:
            proc = subprocess.Popen([os.path.join(self.pythonPath, 'python.exe'), 'setup.py', 'py2exe'], stdin=subprocess.PIPE, cwd=self.buildPath)
            proc.wait()
            #subprocess.check_output([os.path.join(self.pythonPath, 'python.exe'), 'setup.py', 'py2exe'],
            #                        cwd=self.buildPath)
        except subprocess.CalledProcessError as e:
            raise BuildError(e.output)

        super(YoutubeDLBuilder, self).build()


class DownloadBuilder(object):
    def __init__(self, **kwargs):
        self.handler = kwargs.pop('handler')
        self.srcPath = os.path.join(self.buildPath, *tuple(kwargs['path'][2:]))
        self.srcPath = os.path.abspath(os.path.normpath(self.srcPath))
        if not self.srcPath.startswith(self.buildPath):
            raise HTTPError(self.srcPath, 401)

        super(DownloadBuilder, self).__init__(**kwargs)

    def build(self):
        if not os.path.exists(self.srcPath):
            raise HTTPError('No such file', 404)
        if os.path.isdir(self.srcPath):
            raise HTTPError('Is a directory: %s' % self.srcPath, 401)

        self.handler.send_response(200)
        self.handler.send_header('Content-Type', 'application/octet-stream')
        self.handler.send_header('Content-Disposition', 'attachment; filename=%s' % os.path.split(self.srcPath)[-1])
        self.handler.send_header('Content-Length', str(os.stat(self.srcPath).st_size))
        self.handler.end_headers()

        with open(self.srcPath, 'rb') as src:
            shutil.copyfileobj(src, self.handler.wfile)

        super(DownloadBuilder, self).build()


class CleanupTempDir(object):
    def build(self):
        try:
            rmtree(self.basePath)
        except Exception as e:
            print('WARNING deleting "%s": %s' % (self.basePath, e))

        super(CleanupTempDir, self).build()


class Null(object):
    def __init__(self, **kwargs):
        pass

    def start(self):
        pass

    def close(self):
        pass

    def build(self):
        pass


class Builder(PythonBuilder, GITBuilder, YoutubeDLBuilder, DownloadBuilder, CleanupTempDir, Null):
    pass


class BuildHTTPRequestHandler(compat_http_server.BaseHTTPRequestHandler):
    actionDict = {'build': Builder, 'download': Builder}  # They're the same, no more caching.

    def do_GET(self):
        path = compat_urlparse.urlparse(self.path)
        paramDict = dict([(key, value[0]) for key, value in compat_urlparse.parse_qs(path.query).items()])
        action, _, path = path.path.strip('/').partition('/')
        if path:
            path = path.split('/')
        if action in self.actionDict:
            try:
                builder = self.actionDict[action](path=path, handler=self, **paramDict)
|
|
||||||
builder.start()
|
|
||||||
try:
|
|
||||||
builder.build()
|
|
||||||
finally:
|
|
||||||
builder.close()
|
|
||||||
except BuildError as e:
|
|
||||||
self.send_response(e.code)
|
|
||||||
msg = compat_str(e).encode('UTF-8')
|
|
||||||
self.send_header('Content-Type', 'text/plain; charset=UTF-8')
|
|
||||||
self.send_header('Content-Length', len(msg))
|
|
||||||
self.end_headers()
|
|
||||||
self.wfile.write(msg)
|
|
||||||
else:
|
|
||||||
self.send_response(500, 'Unknown build method "%s"' % action)
|
|
||||||
else:
|
|
||||||
self.send_response(500, 'Malformed URL')
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
main()
|
|
|
@@ -1,60 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

"""
This script employs a VERY basic heuristic ('porn' in webpage.lower()) to check
if we are not 'age_limit' tagging some porn site

A second approach implemented relies on a list of porn domains, to activate it
pass the list filename as the only argument
"""

# Allow direct execution
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import gettestcases
from youtube_dl.utils import compat_urllib_parse_urlparse
from youtube_dl.utils import compat_urllib_request

if len(sys.argv) > 1:
    METHOD = 'LIST'
    LIST = open(sys.argv[1]).read().decode('utf8').strip()
else:
    METHOD = 'EURISTIC'

for test in gettestcases():
    if METHOD == 'EURISTIC':
        try:
            webpage = compat_urllib_request.urlopen(test['url'], timeout=10).read()
        except Exception:
            print('\nFail: {0}'.format(test['name']))
            continue

        webpage = webpage.decode('utf8', 'replace')

        RESULT = 'porn' in webpage.lower()

    elif METHOD == 'LIST':
        domain = compat_urllib_parse_urlparse(test['url']).netloc
        if not domain:
            print('\nFail: {0}'.format(test['name']))
            continue
        domain = '.'.join(domain.split('.')[-2:])

        RESULT = ('.' + domain + '\n' in LIST or '\n' + domain + '\n' in LIST)

    if RESULT and ('info_dict' not in test or 'age_limit' not in test['info_dict']
                   or test['info_dict']['age_limit'] != 18):
        print('\nPotential missing age_limit check: {0}'.format(test['name']))

    elif not RESULT and ('info_dict' in test and 'age_limit' in test['info_dict']
                         and test['info_dict']['age_limit'] == 18):
        print('\nPotential false negative: {0}'.format(test['name']))

    else:
        sys.stdout.write('.')
        sys.stdout.flush()

print()
@@ -1,110 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import io
import json
import mimetypes
import netrc
import optparse
import os
import re
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.compat import (
    compat_basestring,
    compat_getpass,
    compat_print,
    compat_urllib_request,
)
from youtube_dl.utils import (
    make_HTTPS_handler,
    sanitized_Request,
)


class GitHubReleaser(object):
    _API_URL = 'https://api.github.com/repos/ytdl-org/youtube-dl/releases'
    _UPLOADS_URL = 'https://uploads.github.com/repos/ytdl-org/youtube-dl/releases/%s/assets?name=%s'
    _NETRC_MACHINE = 'github.com'

    def __init__(self, debuglevel=0):
        self._init_github_account()
        https_handler = make_HTTPS_handler({}, debuglevel=debuglevel)
        self._opener = compat_urllib_request.build_opener(https_handler)

    def _init_github_account(self):
        try:
            info = netrc.netrc().authenticators(self._NETRC_MACHINE)
            if info is not None:
                self._token = info[2]
                compat_print('Using GitHub credentials found in .netrc...')
                return
            else:
                compat_print('No GitHub credentials found in .netrc')
        except (IOError, netrc.NetrcParseError):
            compat_print('Unable to parse .netrc')
        self._token = compat_getpass(
            'Type your GitHub PAT (personal access token) and press [Return]: ')

    def _call(self, req):
        if isinstance(req, compat_basestring):
            req = sanitized_Request(req)
        req.add_header('Authorization', 'token %s' % self._token)
        response = self._opener.open(req).read().decode('utf-8')
        return json.loads(response)

    def list_releases(self):
        return self._call(self._API_URL)

    def create_release(self, tag_name, name=None, body='', draft=False, prerelease=False):
        data = {
            'tag_name': tag_name,
            'target_commitish': 'master',
            'name': name,
            'body': body,
            'draft': draft,
            'prerelease': prerelease,
        }
        req = sanitized_Request(self._API_URL, json.dumps(data).encode('utf-8'))
        return self._call(req)

    def create_asset(self, release_id, asset):
        asset_name = os.path.basename(asset)
        url = self._UPLOADS_URL % (release_id, asset_name)
        # Our files are small enough to be loaded directly into memory.
        data = open(asset, 'rb').read()
        req = sanitized_Request(url, data)
        mime_type, _ = mimetypes.guess_type(asset_name)
        req.add_header('Content-Type', mime_type or 'application/octet-stream')
        return self._call(req)


def main():
    parser = optparse.OptionParser(usage='%prog CHANGELOG VERSION BUILDPATH')
    options, args = parser.parse_args()
    if len(args) != 3:
        parser.error('Expected a version and a build directory')

    changelog_file, version, build_path = args

    with io.open(changelog_file, encoding='utf-8') as inf:
        changelog = inf.read()

    mobj = re.search(r'(?s)version %s\n{2}(.+?)\n{3}' % version, changelog)
    body = mobj.group(1) if mobj else ''

    releaser = GitHubReleaser()

    new_release = releaser.create_release(
        version, name='youtube-dl %s' % version, body=body)
    release_id = new_release['id']

    for asset in os.listdir(build_path):
        compat_print('Uploading %s...' % asset)
        releaser.create_asset(release_id, os.path.join(build_path, asset))


if __name__ == '__main__':
    main()
@@ -1,5 +0,0 @@

{{commands}}


complete --command youtube-dl --arguments ":ytfavorites :ytrecommended :ytsubscriptions :ytwatchlater :ythistory"
@@ -1,49 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import optparse
import os
from os.path import dirname as dirn
import sys

sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl
from youtube_dl.utils import shell_quote

FISH_COMPLETION_FILE = 'youtube-dl.fish'
FISH_COMPLETION_TEMPLATE = 'devscripts/fish-completion.in'

EXTRA_ARGS = {
    'recode-video': ['--arguments', 'mp4 flv ogg webm mkv', '--exclusive'],

    # Options that need a file parameter
    'download-archive': ['--require-parameter'],
    'cookies': ['--require-parameter'],
    'load-info': ['--require-parameter'],
    'batch-file': ['--require-parameter'],
}


def build_completion(opt_parser):
    commands = []

    for group in opt_parser.option_groups:
        for option in group.option_list:
            long_option = option.get_opt_string().strip('-')
            complete_cmd = ['complete', '--command', 'youtube-dl', '--long-option', long_option]
            if option._short_opts:
                complete_cmd += ['--short-option', option._short_opts[0].strip('-')]
            if option.help != optparse.SUPPRESS_HELP:
                complete_cmd += ['--description', option.help]
            complete_cmd.extend(EXTRA_ARGS.get(long_option, []))
            commands.append(shell_quote(complete_cmd))

    with open(FISH_COMPLETION_TEMPLATE) as f:
        template = f.read()
    filled_template = template.replace('{{commands}}', '\n'.join(commands))
    with open(FISH_COMPLETION_FILE, 'w') as f:
        f.write(filled_template)


parser = youtube_dl.parseOpts()[0]
build_completion(parser)
@@ -1,43 +0,0 @@
from __future__ import unicode_literals

import codecs
import subprocess

import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.utils import intlist_to_bytes
from youtube_dl.aes import aes_encrypt, key_expansion

secret_msg = b'Secret message goes here'


def hex_str(int_list):
    return codecs.encode(intlist_to_bytes(int_list), 'hex')


def openssl_encode(algo, key, iv):
    cmd = ['openssl', 'enc', '-e', '-' + algo, '-K', hex_str(key), '-iv', hex_str(iv)]
    prog = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = prog.communicate(secret_msg)
    return out


iv = key = [0x20, 0x15] + 14 * [0]

r = openssl_encode('aes-128-cbc', key, iv)
print('aes_cbc_decrypt')
print(repr(r))

password = key
new_key = aes_encrypt(password, key_expansion(password))
r = openssl_encode('aes-128-ctr', new_key, iv)
print('aes_decrypt_text 16')
print(repr(r))

password = key + 16 * [0]
new_key = aes_encrypt(password, key_expansion(password)) * (32 // 16)
r = openssl_encode('aes-256-ctr', new_key, iv)
print('aes_decrypt_text 32')
print(repr(r))
@@ -1,43 +0,0 @@
#!/usr/bin/env python3
from __future__ import unicode_literals

import json
import sys
import hashlib
import os.path


if len(sys.argv) <= 1:
    print('Specify the version number as parameter')
    sys.exit()
version = sys.argv[1]

with open('update/LATEST_VERSION', 'w') as f:
    f.write(version)

versions_info = json.load(open('update/versions.json'))
if 'signature' in versions_info:
    del versions_info['signature']

new_version = {}

filenames = {
    'bin': 'youtube-dl',
    'exe': 'youtube-dl.exe',
    'tar': 'youtube-dl-%s.tar.gz' % version}
build_dir = os.path.join('..', '..', 'build', version)
for key, filename in filenames.items():
    url = 'https://yt-dl.org/downloads/%s/%s' % (version, filename)
    fn = os.path.join(build_dir, filename)
    with open(fn, 'rb') as f:
        data = f.read()
    if not data:
        raise ValueError('File %s is empty!' % fn)
    sha256sum = hashlib.sha256(data).hexdigest()
    new_version[key] = (url, sha256sum)

versions_info['versions'][version] = new_version
versions_info['latest'] = version

with open('update/versions.json', 'w') as jsonf:
    json.dump(versions_info, jsonf, indent=4, sort_keys=True)
@@ -1,22 +0,0 @@
#!/usr/bin/env python3
from __future__ import unicode_literals

import json

versions_info = json.load(open('update/versions.json'))
version = versions_info['latest']
version_dict = versions_info['versions'][version]

# Read template page
with open('download.html.in', 'r', encoding='utf-8') as tmplf:
    template = tmplf.read()

template = template.replace('@PROGRAM_VERSION@', version)
template = template.replace('@PROGRAM_URL@', version_dict['bin'][0])
template = template.replace('@PROGRAM_SHA256SUM@', version_dict['bin'][1])
template = template.replace('@EXE_URL@', version_dict['exe'][0])
template = template.replace('@EXE_SHA256SUM@', version_dict['exe'][1])
template = template.replace('@TAR_URL@', version_dict['tar'][0])
template = template.replace('@TAR_SHA256SUM@', version_dict['tar'][1])
with open('download.html', 'w', encoding='utf-8') as dlf:
    dlf.write(template)
@@ -1,34 +0,0 @@
#!/usr/bin/env python3
from __future__ import unicode_literals, with_statement

import rsa
import json
from binascii import hexlify

try:
    input = raw_input
except NameError:
    pass

versions_info = json.load(open('update/versions.json'))
if 'signature' in versions_info:
    del versions_info['signature']

print('Enter the PKCS1 private key, followed by a blank line:')
privkey = b''
while True:
    try:
        line = input()
    except EOFError:
        break
    if line == '':
        break
    privkey += line.encode('ascii') + b'\n'
privkey = rsa.PrivateKey.load_pkcs1(privkey)

signature = hexlify(rsa.pkcs1.sign(json.dumps(versions_info, sort_keys=True).encode('utf-8'), privkey, 'SHA-256')).decode()
print('signature: ' + signature)

versions_info['signature'] = signature
with open('update/versions.json', 'w') as versionsf:
    json.dump(versions_info, versionsf, indent=4, sort_keys=True)
@@ -1,21 +0,0 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import with_statement, unicode_literals

import datetime
import glob
import io  # For Python 2 compatibility
import os
import re

year = str(datetime.datetime.now().year)
for fn in glob.glob('*.html*'):
    with io.open(fn, encoding='utf-8') as f:
        content = f.read()
    newc = re.sub(r'(?P<copyright>Copyright © 2011-)(?P<year>[0-9]{4})', 'Copyright © 2011-' + year, content)
    if content != newc:
        tmpFn = fn + '.part'
        with io.open(tmpFn, 'wt', encoding='utf-8') as outf:
            outf.write(newc)
        os.rename(tmpFn, fn)
@@ -1,76 +0,0 @@
#!/usr/bin/env python3
from __future__ import unicode_literals

import datetime
import io
import json
import textwrap


atom_template = textwrap.dedent("""\
    <?xml version="1.0" encoding="utf-8"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
        <link rel="self" href="http://ytdl-org.github.io/youtube-dl/update/releases.atom" />
        <title>youtube-dl releases</title>
        <id>https://yt-dl.org/feed/youtube-dl-updates-feed</id>
        <updated>@TIMESTAMP@</updated>
        @ENTRIES@
    </feed>""")

entry_template = textwrap.dedent("""
    <entry>
        <id>https://yt-dl.org/feed/youtube-dl-updates-feed/youtube-dl-@VERSION@</id>
        <title>New version @VERSION@</title>
        <link href="http://ytdl-org.github.io/youtube-dl" />
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                Downloads available at <a href="https://yt-dl.org/downloads/@VERSION@/">https://yt-dl.org/downloads/@VERSION@/</a>
            </div>
        </content>
        <author>
            <name>The youtube-dl maintainers</name>
        </author>
        <updated>@TIMESTAMP@</updated>
    </entry>
    """)

now = datetime.datetime.now()
now_iso = now.isoformat() + 'Z'

atom_template = atom_template.replace('@TIMESTAMP@', now_iso)

versions_info = json.load(open('update/versions.json'))
versions = list(versions_info['versions'].keys())
versions.sort()

entries = []
for v in versions:
    fields = v.split('.')
    year, month, day = map(int, fields[:3])
    faked = 0
    patchlevel = 0
    while True:
        try:
            datetime.date(year, month, day)
        except ValueError:
            day -= 1
            faked += 1
            assert day > 0
            continue
        break
    if len(fields) >= 4:
        try:
            patchlevel = int(fields[3])
        except ValueError:
            patchlevel = 1
    timestamp = '%04d-%02d-%02dT00:%02d:%02dZ' % (year, month, day, faked, patchlevel)

    entry = entry_template.replace('@TIMESTAMP@', timestamp)
    entry = entry.replace('@VERSION@', v)
    entries.append(entry)

entries_str = textwrap.indent(''.join(entries), '\t')
atom_template = atom_template.replace('@ENTRIES@', entries_str)

with io.open('update/releases.atom', 'w', encoding='utf-8') as atom_file:
    atom_file.write(atom_template)
@@ -1,37 +0,0 @@
#!/usr/bin/env python3
from __future__ import unicode_literals

import sys
import os
import textwrap

# We must be able to import youtube_dl
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))

import youtube_dl


def main():
    with open('supportedsites.html.in', 'r', encoding='utf-8') as tmplf:
        template = tmplf.read()

    ie_htmls = []
    for ie in youtube_dl.list_extractors(age_limit=None):
        ie_html = '<b>{}</b>'.format(ie.IE_NAME)
        ie_desc = getattr(ie, 'IE_DESC', None)
        if ie_desc is False:
            continue
        elif ie_desc is not None:
            ie_html += ': {}'.format(ie.IE_DESC)
        if not ie.working():
            ie_html += ' (Currently broken)'
        ie_htmls.append('<li>{}</li>'.format(ie_html))

    template = template.replace('@SITES@', textwrap.indent('\n'.join(ie_htmls), '\t'))

    with open('supportedsites.html', 'w', encoding='utf-8') as sitesf:
        sitesf.write(template)


if __name__ == '__main__':
    main()
@@ -1,5 +0,0 @@
#!/bin/bash

wget http://central.maven.org/maven2/org/python/jython-installer/2.7.1/jython-installer-2.7.1.jar
java -jar jython-installer-2.7.1.jar -s -d "$HOME/jython"
$HOME/jython/bin/jython -m pip install nose
@@ -1,19 +0,0 @@
# coding: utf-8
from __future__ import unicode_literals

import re


class LazyLoadExtractor(object):
    _module = None

    @classmethod
    def ie_key(cls):
        return cls.__name__[:-2]

    def __new__(cls, *args, **kwargs):
        mod = __import__(cls._module, fromlist=(cls.__name__,))
        real_cls = getattr(mod, cls.__name__)
        instance = real_cls.__new__(real_cls)
        instance.__init__(*args, **kwargs)
        return instance
@@ -1,33 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import io
import optparse
import re


def main():
    parser = optparse.OptionParser(usage='%prog INFILE OUTFILE')
    options, args = parser.parse_args()
    if len(args) != 2:
        parser.error('Expected an input and an output filename')

    infile, outfile = args

    with io.open(infile, encoding='utf-8') as inf:
        readme = inf.read()

    bug_text = re.search(
        r'(?s)#\s*BUGS\s*[^\n]*\s*(.*?)#\s*COPYRIGHT', readme).group(1)
    dev_text = re.search(
        r'(?s)(#\s*DEVELOPER INSTRUCTIONS.*?)#\s*EMBEDDING YOUTUBE-DL',
        readme).group(1)

    out = bug_text + dev_text

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(out)


if __name__ == '__main__':
    main()
@@ -1,29 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import io
import optparse


def main():
    parser = optparse.OptionParser(usage='%prog INFILE OUTFILE')
    options, args = parser.parse_args()
    if len(args) != 2:
        parser.error('Expected an input and an output filename')

    infile, outfile = args

    with io.open(infile, encoding='utf-8') as inf:
        issue_template_tmpl = inf.read()

    # Get the version from youtube_dl/version.py without importing the package
    exec(compile(open('youtube_dl/version.py').read(),
                 'youtube_dl/version.py', 'exec'))

    out = issue_template_tmpl % {'version': locals()['__version__']}

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(out)

if __name__ == '__main__':
    main()
@@ -1,100 +0,0 @@
from __future__ import unicode_literals, print_function

from inspect import getsource
import io
import os
from os.path import dirname as dirn
import sys

print('WARNING: Lazy loading extractors is an experimental feature that may not always work', file=sys.stderr)

sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))

lazy_extractors_filename = sys.argv[1]
if os.path.exists(lazy_extractors_filename):
    os.remove(lazy_extractors_filename)

from youtube_dl.extractor import _ALL_CLASSES
from youtube_dl.extractor.common import InfoExtractor, SearchInfoExtractor

with open('devscripts/lazy_load_template.py', 'rt') as f:
    module_template = f.read()

module_contents = [
    module_template + '\n' + getsource(InfoExtractor.suitable) + '\n',
    'class LazyLoadSearchExtractor(LazyLoadExtractor):\n    pass\n']

ie_template = '''
class {name}({bases}):
    _VALID_URL = {valid_url!r}
    _module = '{module}'
'''

make_valid_template = '''
    @classmethod
    def _make_valid_url(cls):
        return {valid_url!r}
'''


def get_base_name(base):
    if base is InfoExtractor:
        return 'LazyLoadExtractor'
    elif base is SearchInfoExtractor:
        return 'LazyLoadSearchExtractor'
    else:
        return base.__name__


def build_lazy_ie(ie, name):
    valid_url = getattr(ie, '_VALID_URL', None)
    s = ie_template.format(
        name=name,
        bases=', '.join(map(get_base_name, ie.__bases__)),
        valid_url=valid_url,
        module=ie.__module__)
    if ie.suitable.__func__ is not InfoExtractor.suitable.__func__:
        s += '\n' + getsource(ie.suitable)
    if hasattr(ie, '_make_valid_url'):
        # search extractors
        s += make_valid_template.format(valid_url=ie._make_valid_url())
    return s


# find the correct sorting and add the required base classes so that sublcasses
# can be correctly created
classes = _ALL_CLASSES[:-1]
ordered_cls = []
while classes:
    for c in classes[:]:
        bases = set(c.__bases__) - set((object, InfoExtractor, SearchInfoExtractor))
        stop = False
        for b in bases:
            if b not in classes and b not in ordered_cls:
                if b.__name__ == 'GenericIE':
                    exit()
                classes.insert(0, b)
                stop = True
        if stop:
            break
        if all(b in ordered_cls for b in bases):
            ordered_cls.append(c)
            classes.remove(c)
            break
ordered_cls.append(_ALL_CLASSES[-1])

names = []
for ie in ordered_cls:
    name = ie.__name__
    src = build_lazy_ie(ie, name)
    module_contents.append(src)
    if ie in _ALL_CLASSES:
        names.append(name)

module_contents.append(
    '_ALL_CLASSES = [{0}]'.format(', '.join(names)))

module_src = '\n'.join(module_contents) + '\n'

with io.open(lazy_extractors_filename, 'wt', encoding='utf-8') as f:
    f.write(module_src)
@@ -1,26 +0,0 @@
from __future__ import unicode_literals

import io
import sys
import re

README_FILE = 'README.md'
helptext = sys.stdin.read()

if isinstance(helptext, bytes):
    helptext = helptext.decode('utf-8')

with io.open(README_FILE, encoding='utf-8') as f:
    oldreadme = f.read()

header = oldreadme[:oldreadme.index('# OPTIONS')]
footer = oldreadme[oldreadme.index('# CONFIGURATION'):]

options = helptext[helptext.index('  General Options:') + 19:]
options = re.sub(r'(?m)^  (\w.+)$', r'## \1', options)
options = '# OPTIONS\n' + options + '\n'

with io.open(README_FILE, 'w', encoding='utf-8') as f:
    f.write(header)
    f.write(options)
    f.write(footer)
@@ -1,46 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import io
import optparse
import os
import sys


# Import youtube_dl
ROOT_DIR = os.path.join(os.path.dirname(__file__), '..')
sys.path.insert(0, ROOT_DIR)
import youtube_dl


def main():
    parser = optparse.OptionParser(usage='%prog OUTFILE.md')
    options, args = parser.parse_args()
    if len(args) != 1:
        parser.error('Expected an output filename')

    outfile, = args

    def gen_ies_md(ies):
        for ie in ies:
            ie_md = '**{0}**'.format(ie.IE_NAME)
            ie_desc = getattr(ie, 'IE_DESC', None)
            if ie_desc is False:
                continue
            if ie_desc is not None:
                ie_md += ': {0}'.format(ie.IE_DESC)
            if not ie.working():
                ie_md += ' (Currently broken)'
            yield ie_md

    ies = sorted(youtube_dl.gen_extractors(), key=lambda i: i.IE_NAME.lower())
    out = '# Supported sites\n' + ''.join(
        ' - ' + md + '\n'
        for md in gen_ies_md(ies))

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(out)


if __name__ == '__main__':
    main()
@@ -1,6 +0,0 @@

# source this file in your shell to get a POSIX locale (which will break many programs, but that's kind of the point)

export LC_ALL=POSIX
export LANG=POSIX
export LANGUAGE=POSIX
@@ -1,79 +0,0 @@
from __future__ import unicode_literals

import io
import optparse
import os.path
import re

ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
README_FILE = os.path.join(ROOT_DIR, 'README.md')

PREFIX = r'''%YOUTUBE-DL(1)

# NAME

youtube\-dl \- download videos from youtube.com or other video platforms

# SYNOPSIS

**youtube-dl** \[OPTIONS\] URL [URL...]

'''


def main():
    parser = optparse.OptionParser(usage='%prog OUTFILE.md')
    options, args = parser.parse_args()
    if len(args) != 1:
        parser.error('Expected an output filename')

    outfile, = args

    with io.open(README_FILE, encoding='utf-8') as f:
        readme = f.read()

    readme = re.sub(r'(?s)^.*?(?=# DESCRIPTION)', '', readme)
    readme = re.sub(r'\s+youtube-dl \[OPTIONS\] URL \[URL\.\.\.\]', '', readme)
    readme = PREFIX + readme

    readme = filter_options(readme)

    with io.open(outfile, 'w', encoding='utf-8') as outf:
        outf.write(readme)


def filter_options(readme):
    ret = ''
    in_options = False
    for line in readme.split('\n'):
        if line.startswith('# '):
            if line[2:].startswith('OPTIONS'):
                in_options = True
            else:
                in_options = False

        if in_options:
            if line.lstrip().startswith('-'):
                split = re.split(r'\s{2,}', line.lstrip())
                # Description string may start with `-` as well. If there is
                # only one piece then it's a description bit, not an option.
                if len(split) > 1:
                    option, description = split
                    split_option = option.split(' ')

                    if not split_option[-1].startswith('-'):  # metavar
                        option = ' '.join(split_option[:-1] + ['*%s*' % split_option[-1]])

                    # Pandoc's definition_lists. See http://pandoc.org/README.html
                    # for more information.
                    ret += '\n%s\n: %s\n' % (option, description)
                    continue
            ret += line.lstrip() + '\n'
        else:
            ret += line + '\n'

    return ret


if __name__ == '__main__':
    main()
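The two-or-more-spaces split in `filter_options` above is what separates the option column from its description before the pair is emitted as a pandoc definition-list entry. A minimal standalone sketch (the sample line is illustrative, not copied from the README):

```python
import re

# Illustrative line in the style of youtube-dl's README OPTIONS section.
line = '-o, --output TEMPLATE            output filename template'

# Runs of two or more spaces separate the option column from its description.
option, description = re.split(r'\s{2,}', line.strip())

# The trailing metavar (TEMPLATE) gets emphasized, as in filter_options().
parts = option.split(' ')
if not parts[-1].startswith('-'):
    option = ' '.join(parts[:-1] + ['*%s*' % parts[-1]])

entry = '\n%s\n: %s\n' % (option, description)
print(entry)
# -> '\n-o, --output *TEMPLATE*\n: output filename template\n'
```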
@@ -1,141 +0,0 @@
#!/bin/bash

# IMPORTANT: the following assumptions are made
# * the GH repo is on the origin remote
# * the gh-pages branch is named so locally
# * the git config user.signingkey is properly set

# You will need
# pip install coverage nose rsa wheel

# TODO
# release notes
# make hash on local files

set -e

skip_tests=true
gpg_sign_commits=""
buildserver='localhost:8142'

while true
do
case "$1" in
    --run-tests)
        skip_tests=false
        shift
    ;;
    --gpg-sign-commits|-S)
        gpg_sign_commits="-S"
        shift
    ;;
    --buildserver)
        buildserver="$2"
        shift 2
    ;;
    --*)
        echo "ERROR: unknown option $1"
        exit 1
    ;;
    *)
        break
    ;;
esac
done

if [ -z "$1" ]; then echo "ERROR: specify version number like this: $0 1994.09.06"; exit 1; fi
version="$1"
major_version=$(echo "$version" | sed -n 's#^\([0-9]*\.[0-9]*\.[0-9]*\).*#\1#p')
if test "$major_version" '!=' "$(date '+%Y.%m.%d')"; then
    echo "$version does not start with today's date!"
    exit 1
fi

if [ ! -z "`git tag | grep "$version"`" ]; then echo 'ERROR: version already present'; exit 1; fi
if [ ! -z "`git status --porcelain | grep -v CHANGELOG`" ]; then echo 'ERROR: the working directory is not clean; commit or stash changes'; exit 1; fi
useless_files=$(find youtube_dl -type f -not -name '*.py')
if [ ! -z "$useless_files" ]; then echo "ERROR: Non-.py files in youtube_dl: $useless_files"; exit 1; fi
if [ ! -f "updates_key.pem" ]; then echo 'ERROR: updates_key.pem missing'; exit 1; fi
if ! type pandoc >/dev/null 2>/dev/null; then echo 'ERROR: pandoc is missing'; exit 1; fi
if ! python3 -c 'import rsa' 2>/dev/null; then echo 'ERROR: python3-rsa is missing'; exit 1; fi
if ! python3 -c 'import wheel' 2>/dev/null; then echo 'ERROR: wheel is missing'; exit 1; fi

read -p "Is ChangeLog up to date? (y/n) " -n 1
if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 1; fi

/bin/echo -e "\n### First of all, testing..."
make clean
if $skip_tests ; then
    echo 'SKIPPING TESTS'
else
    nosetests --verbose --with-coverage --cover-package=youtube_dl --cover-html test --stop || exit 1
fi

/bin/echo -e "\n### Changing version in version.py..."
sed -i "s/__version__ = '.*'/__version__ = '$version'/" youtube_dl/version.py

/bin/echo -e "\n### Changing version in ChangeLog..."
sed -i "s/<unreleased>/$version/" ChangeLog

/bin/echo -e "\n### Committing documentation, templates and youtube_dl/version.py..."
make README.md CONTRIBUTING.md issuetemplates supportedsites
git add README.md CONTRIBUTING.md .github/ISSUE_TEMPLATE/1_broken_site.md .github/ISSUE_TEMPLATE/2_site_support_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md .github/ISSUE_TEMPLATE/4_bug_report.md .github/ISSUE_TEMPLATE/5_feature_request.md .github/ISSUE_TEMPLATE/6_question.md docs/supportedsites.md youtube_dl/version.py ChangeLog
git commit $gpg_sign_commits -m "release $version"

/bin/echo -e "\n### Now tagging, signing and pushing..."
git tag -s -m "Release $version" "$version"
git show "$version"
read -p "Is it good, can I push? (y/n) " -n 1
if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 1; fi
echo
MASTER=$(git rev-parse --abbrev-ref HEAD)
git push origin $MASTER:master
git push origin "$version"

/bin/echo -e "\n### OK, now it is time to build the binaries..."
REV=$(git rev-parse HEAD)
make youtube-dl youtube-dl.tar.gz
read -p "VM running? (y/n) " -n 1
wget "http://$buildserver/build/ytdl-org/youtube-dl/youtube-dl.exe?rev=$REV" -O youtube-dl.exe
mkdir -p "build/$version"
mv youtube-dl youtube-dl.exe "build/$version"
mv youtube-dl.tar.gz "build/$version/youtube-dl-$version.tar.gz"
RELEASE_FILES="youtube-dl youtube-dl.exe youtube-dl-$version.tar.gz"
(cd build/$version/ && md5sum $RELEASE_FILES > MD5SUMS)
(cd build/$version/ && sha1sum $RELEASE_FILES > SHA1SUMS)
(cd build/$version/ && sha256sum $RELEASE_FILES > SHA2-256SUMS)
(cd build/$version/ && sha512sum $RELEASE_FILES > SHA2-512SUMS)

/bin/echo -e "\n### Signing and uploading the new binaries to GitHub..."
for f in $RELEASE_FILES; do gpg --passphrase-repeat 5 --detach-sig "build/$version/$f"; done

ROOT=$(pwd)
python devscripts/create-github-release.py ChangeLog $version "$ROOT/build/$version"

ssh ytdl@yt-dl.org "sh html/update_latest.sh $version"

/bin/echo -e "\n### Now switching to gh-pages..."
git clone --branch gh-pages --single-branch . build/gh-pages
(
    set -e
    ORIGIN_URL=$(git config --get remote.origin.url)
    cd build/gh-pages
    "$ROOT/devscripts/gh-pages/add-version.py" $version
    "$ROOT/devscripts/gh-pages/update-feed.py"
    "$ROOT/devscripts/gh-pages/sign-versions.py" < "$ROOT/updates_key.pem"
    "$ROOT/devscripts/gh-pages/generate-download.py"
    "$ROOT/devscripts/gh-pages/update-copyright.py"
    "$ROOT/devscripts/gh-pages/update-sites.py"
    git add *.html *.html.in update
    git commit $gpg_sign_commits -m "release $version"
    git push "$ROOT" gh-pages
    git push "$ORIGIN_URL" gh-pages
)
rm -rf build

make pypi-files
echo "Uploading to PyPI ..."
python setup.py sdist bdist_wheel upload
make clean

/bin/echo -e "\n### DONE!"
@@ -1,22 +0,0 @@
#!/bin/bash

# Keep this list in sync with the `offlinetest` target in Makefile
DOWNLOAD_TESTS="age_restriction|download|iqiyi_sdk_interpreter|socks|subtitles|write_annotations|youtube_lists|youtube_signature"

test_set=""
multiprocess_args=""

case "$YTDL_TEST_SET" in
    core)
        test_set="-I test_($DOWNLOAD_TESTS)\.py"
    ;;
    download)
        test_set="-I test_(?!$DOWNLOAD_TESTS).+\.py"
        multiprocess_args="--processes=4 --process-timeout=540"
    ;;
    *)
        break
    ;;
esac

nosetests test --verbose $test_set $multiprocess_args
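The two `-I` (ignore) patterns above split the suite in complementary halves: "core" ignores the download-heavy tests by plain alternation, while "download" ignores everything *but* them via a negative lookahead. A small sketch of the two regexes in action (the alternation is shortened to two entries for illustration):

```python
import re

# The DOWNLOAD_TESTS alternation, shortened to two entries for illustration.
download_tests = 'download|subtitles'
core_ignore = re.compile(r'test_(%s)\.py' % download_tests)
download_ignore = re.compile(r'test_(?!%s).+\.py' % download_tests)

files = ['test_download.py', 'test_subtitles.py', 'test_utils.py']

# The "core" set ignores the download-heavy tests...
core_set = [f for f in files if not core_ignore.match(f)]
# ...while the "download" set ignores everything else (negative lookahead).
download_set = [f for f in files if not download_ignore.match(f)]

print(core_set)      # ['test_utils.py']
print(download_set)  # ['test_download.py', 'test_subtitles.py']
```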
@@ -1,47 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import itertools
import json
import os
import re
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.compat import (
    compat_print,
    compat_urllib_request,
)
from youtube_dl.utils import format_bytes


def format_size(bytes):
    return '%s (%d bytes)' % (format_bytes(bytes), bytes)


total_bytes = 0

for page in itertools.count(1):
    releases = json.loads(compat_urllib_request.urlopen(
        'https://api.github.com/repos/ytdl-org/youtube-dl/releases?page=%s' % page
    ).read().decode('utf-8'))

    if not releases:
        break

    for release in releases:
        compat_print(release['name'])
        for asset in release['assets']:
            asset_name = asset['name']
            total_bytes += asset['download_count'] * asset['size']
            if all(not re.match(p, asset_name) for p in (
                    r'^youtube-dl$',
                    r'^youtube-dl-\d{4}\.\d{2}\.\d{2}(?:\.\d+)?\.tar\.gz$',
                    r'^youtube-dl\.exe$')):
                continue
            compat_print(
                ' %s size: %s downloads: %d'
                % (asset_name, format_size(asset['size']), asset['download_count']))

compat_print('total downloads traffic: %s' % format_size(total_bytes))
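The `all(not re.match(...))` guard in the statistics script prints per-asset lines only for the three canonical release artifacts (the binary, the tarball, and the .exe). The same filter, inverted into a keep-list over made-up asset names:

```python
import re

# The three asset-name patterns from the script above.
patterns = (
    r'^youtube-dl$',
    r'^youtube-dl-\d{4}\.\d{2}\.\d{2}(?:\.\d+)?\.tar\.gz$',
    r'^youtube-dl\.exe$',
)

# Illustrative asset names; checksum files and signatures are filtered out.
names = ['youtube-dl', 'youtube-dl.exe',
         'youtube-dl-2020.11.12.tar.gz', 'SHA2-256SUMS']
kept = [n for n in names if any(re.match(p, n) for p in patterns)]
print(kept)  # ['youtube-dl', 'youtube-dl.exe', 'youtube-dl-2020.11.12.tar.gz']
```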
@@ -1,56 +0,0 @@
#!/bin/bash

# Pass as parameter a setup.py that works from the current directory
# (i.e. one that does no os.chdir()).
# It will run twice; the first run is expected to crash.

set -e

SCRIPT_DIR="$( cd "$( dirname "$0" )" && pwd )"

if [ ! -d wine-py2exe ]; then

    sudo apt-get install wine1.3 axel bsdiff

    mkdir wine-py2exe
    cd wine-py2exe
    export WINEPREFIX=`pwd`

    axel -a "http://www.python.org/ftp/python/2.7/python-2.7.msi"
    axel -a "http://downloads.sourceforge.net/project/py2exe/py2exe/0.6.9/py2exe-0.6.9.win32-py2.7.exe"
    #axel -a "http://winetricks.org/winetricks"

    # http://appdb.winehq.org/objectManager.php?sClass=version&iId=21957
    echo "Follow python setup on screen"
    wine msiexec /i python-2.7.msi

    echo "Follow py2exe setup on screen"
    wine py2exe-0.6.9.win32-py2.7.exe

    #echo "Follow Microsoft Visual C++ 2008 Redistributable Package setup on screen"
    #bash winetricks vcrun2008

    rm py2exe-0.6.9.win32-py2.7.exe
    rm python-2.7.msi
    #rm winetricks

    # http://bugs.winehq.org/show_bug.cgi?id=3591

    mv drive_c/Python27/Lib/site-packages/py2exe/run.exe drive_c/Python27/Lib/site-packages/py2exe/run.exe.backup
    bspatch drive_c/Python27/Lib/site-packages/py2exe/run.exe.backup drive_c/Python27/Lib/site-packages/py2exe/run.exe "$SCRIPT_DIR/SizeOfImage.patch"
    mv drive_c/Python27/Lib/site-packages/py2exe/run_w.exe drive_c/Python27/Lib/site-packages/py2exe/run_w.exe.backup
    bspatch drive_c/Python27/Lib/site-packages/py2exe/run_w.exe.backup drive_c/Python27/Lib/site-packages/py2exe/run_w.exe "$SCRIPT_DIR/SizeOfImage_w.patch"

    cd -

else

    export WINEPREFIX="$( cd wine-py2exe && pwd )"

fi

wine "C:\\Python27\\python.exe" "$1" py2exe > "py2exe.log" 2>&1 || true
echo '# Copying python27.dll' >> "py2exe.log"
cp "$WINEPREFIX/drive_c/windows/system32/python27.dll" build/bdist.win32/winexe/bundle-2.7/
wine "C:\\Python27\\python.exe" "$1" py2exe >> "py2exe.log" 2>&1
@@ -1,28 +0,0 @@
#compdef youtube-dl

__youtube_dl() {
    local curcontext="$curcontext" fileopts diropts cur prev
    typeset -A opt_args
    fileopts="{{fileopts}}"
    diropts="{{diropts}}"
    cur=$words[CURRENT]
    case $cur in
        :)
            _arguments '*: :(::ytfavorites ::ytrecommended ::ytsubscriptions ::ytwatchlater ::ythistory)'
        ;;
        *)
            prev=$words[CURRENT-1]
            if [[ ${prev} =~ ${fileopts} ]]; then
                _path_files
            elif [[ ${prev} =~ ${diropts} ]]; then
                _path_files -/
            elif [[ ${prev} == "--recode-video" ]]; then
                _arguments '*: :(mp4 flv ogg webm mkv)'
            else
                _arguments '*: :({{flags}})'
            fi
        ;;
    esac
}

__youtube_dl
@@ -1,49 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

import os
from os.path import dirname as dirn
import sys

sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
import youtube_dl

ZSH_COMPLETION_FILE = "youtube-dl.zsh"
ZSH_COMPLETION_TEMPLATE = "devscripts/zsh-completion.in"


def build_completion(opt_parser):
    opts = [opt for group in opt_parser.option_groups
            for opt in group.option_list]
    opts_file = [opt for opt in opts if opt.metavar == "FILE"]
    opts_dir = [opt for opt in opts if opt.metavar == "DIR"]

    fileopts = []
    for opt in opts_file:
        if opt._short_opts:
            fileopts.extend(opt._short_opts)
        if opt._long_opts:
            fileopts.extend(opt._long_opts)

    diropts = []
    for opt in opts_dir:
        if opt._short_opts:
            diropts.extend(opt._short_opts)
        if opt._long_opts:
            diropts.extend(opt._long_opts)

    flags = [opt.get_opt_string() for opt in opts]

    with open(ZSH_COMPLETION_TEMPLATE) as f:
        template = f.read()

    template = template.replace("{{fileopts}}", "|".join(fileopts))
    template = template.replace("{{diropts}}", "|".join(diropts))
    template = template.replace("{{flags}}", " ".join(flags))

    with open(ZSH_COMPLETION_FILE, "w") as f:
        f.write(template)


parser = youtube_dl.parseOpts()[0]
build_completion(parser)
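`build_completion()` fills the three `{{...}}` placeholders in the zsh-completion.in template by plain string replacement. A toy run with made-up option lists (not read from youtube-dl's real parser):

```python
# Toy stand-in for the template substitution in build_completion();
# the option strings below are illustrative, not youtube-dl's actual lists.
template = 'fileopts="{{fileopts}}"\ndiropts="{{diropts}}"\nflags=({{flags}})'
filled = (template
          .replace('{{fileopts}}', '-a|--batch-file|--cookies')
          .replace('{{diropts}}', '--cache-dir')
          .replace('{{flags}}', '--help --version --update'))
print(filled)
```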
1 docs/.gitignore vendored
@@ -1 +0,0 @@
_build/
177 docs/Makefile
@@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/youtube-dl.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/youtube-dl.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/youtube-dl"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/youtube-dl"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
71 docs/conf.py
@@ -1,71 +0,0 @@
# coding: utf-8
#
# youtube-dl documentation build configuration file, created by
# sphinx-quickstart on Fri Mar 14 21:05:43 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os
# Allows importing youtube_dl
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

# -- General configuration ------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'youtube-dl'
copyright = u'2014, Ricardo Garcia Gonzalez'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from youtube_dl.version import __version__
version = __version__
# The full version, including alpha/beta/rc tags.
release = version

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Output file base name for HTML help builder.
htmlhelp_basename = 'youtube-dldoc'
@@ -1,23 +0,0 @@
Welcome to youtube-dl's documentation!
======================================

*youtube-dl* is a command-line program to download videos from YouTube.com and other sites.
It can also be used in Python code.

Developer guide
---------------

This section contains information on using *youtube-dl* from Python programs.

.. toctree::
    :maxdepth: 2

    module_guide

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -1,67 +0,0 @@
Using the ``youtube_dl`` module
===============================

When using the ``youtube_dl`` module, you start by creating an instance of :class:`YoutubeDL` and adding all the available extractors:

.. code-block:: python

    >>> from youtube_dl import YoutubeDL
    >>> ydl = YoutubeDL()
    >>> ydl.add_default_info_extractors()

Extracting video information
----------------------------

You use the :meth:`YoutubeDL.extract_info` method for getting the video information, which returns a dictionary:

.. code-block:: python

    >>> info = ydl.extract_info('http://www.youtube.com/watch?v=BaW_jenozKc', download=False)
    [youtube] Setting language
    [youtube] BaW_jenozKc: Downloading webpage
    [youtube] BaW_jenozKc: Downloading video info webpage
    [youtube] BaW_jenozKc: Extracting video information
    >>> info['title']
    'youtube-dl test video "\'/\\ä↭𝕐'
    >>> info['height'], info['width']
    (720, 1280)

If you want to download or play the video you can get its URL:

.. code-block:: python

    >>> info['url']
    'https://...'

Extracting playlist information
-------------------------------

The playlist information is extracted in a similar way, but the dictionary is a bit different:

.. code-block:: python

    >>> playlist = ydl.extract_info('http://www.ted.com/playlists/13/open_source_open_world', download=False)
    [TED] open_source_open_world: Downloading playlist webpage
    ...
    >>> playlist['title']
    'Open-source, open world'

You can access the videos in the playlist with the ``entries`` field:

.. code-block:: python

    >>> for video in playlist['entries']:
    ...     print('Video #%d: %s' % (video['playlist_index'], video['title']))

    Video #1: How Arduino is open-sourcing imagination
    Video #2: The year open data went worldwide
    Video #3: Massive-scale online collaboration
    Video #4: The art of asking
    Video #5: How cognitive surplus will change the world
    Video #6: The birth of Wikipedia
    Video #7: Coding a better government
    Video #8: The era of open innovation
    Video #9: The currency of the new economy is trust
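For experimenting without network access, the ``entries`` loop shown in the module guide can be exercised against a hand-built dictionary of the same shape. This is only a sketch; the dict below is fabricated from the example output, not a real ``extract_info()`` result:

```python
# Offline stand-in for a playlist result; mirrors the shape of the
# dictionary returned by extract_info(), with fabricated entries.
playlist = {
    'title': 'Open-source, open world',
    'entries': [
        {'playlist_index': 1, 'title': 'How Arduino is open-sourcing imagination'},
        {'playlist_index': 2, 'title': 'The year open data went worldwide'},
    ],
}

lines = ['Video #%d: %s' % (v['playlist_index'], v['title'])
         for v in playlist['entries']]
print('\n'.join(lines))
```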
File diff suppressed because it is too large
5 downloads/.htaccess Normal file
@@ -0,0 +1,5 @@
RewriteEngine On

RewriteRule ^$ https://github.com/ytdl-org/youtube-dl/releases
RewriteRule latest(.*) /downloads/2019.03.09$1 [L,R=302]
5 downloads/2016.06.03/.htaccess Normal file
@@ -0,0 +1,5 @@
RewriteEngine On

RewriteRule ^$ https://github.com/ytdl-org/youtube-dl/releases/tag/2016.06.03_tmp [R=302,L]
RewriteRule ^(.+)$ https://github.com/ytdl-org/youtube-dl/releases/download/2016.06.03_tmp/$1 [R=302,L]
36
index.php
Normal file
36
index.php
Normal file
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-type" content="text/html;charset=UTF-8">
<title>youtube-dl</title>
<link rel="stylesheet" href="style.css" type="text/css">
</head>
<body>

<h1>youtube-dl downloads</h1>

<?php
$latest = file_get_contents('latest_version');

echo '<div class="latest">';
echo '<div><a href="latest">Latest</a> (v' . htmlspecialchars($latest) . ') downloads:</div>';
echo '<a href="downloads/latest/youtube-dl">youtube-dl</a> ';
echo '<a href="downloads/latest/youtube-dl.exe">youtube-dl.exe</a> ';
echo '<a href="downloads/latest/youtube-dl-' . htmlspecialchars($latest) . '.tar.gz">youtube-dl-' . htmlspecialchars($latest) . '.tar.gz</a>';
echo '</div>';
?>

See the right for more resources.

<table border="0" id="rgb" style="float: right;">
<tr><td><a class="button" id="main-homepage" href="http://ytdl-org.github.io/youtube-dl/">Homepage</a></td></tr>
<tr><td><a class="button" id="g" href="http://ytdl-org.github.io/youtube-dl/download.html">Download instructions</a></td></tr>
<tr><td><a class="button" id="r" href="http://ytdl-org.github.io/youtube-dl/documentation.html">Documentation</a></td></tr>
<tr><td><a class="button" id="main-support" href="https://github.com/ytdl-org/youtube-dl/issues/">Support</a></td></tr>
<tr><td><a class="button" id="y" href="https://github.com/ytdl-org/youtube-dl/">Develop</a></td></tr>
<tr><td><a class="button" id="b" href="http://ytdl-org.github.io/youtube-dl/about.html">About</a></td></tr>
</table>

</body>
</html>
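index.php passes the version string through `htmlspecialchars()` before interpolating it into markup. Python's `html.escape` is the rough equivalent; a minimal sketch (the version value is illustrative):

```python
import html

# Escape the untrusted version string before building the download link,
# mirroring index.php's htmlspecialchars() calls.
latest = '2019.03.09'  # stand-in for the contents of the latest_version file
link = '<a href="downloads/latest/youtube-dl-%s.tar.gz">youtube-dl-%s.tar.gz</a>' % (
    html.escape(latest), html.escape(latest))
print(link)
```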
ip.php (new file, 3 lines):
<?php
header('Content-Type: text/plain');
echo $_SERVER['REMOTE_ADDR'];
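ip.php simply echoes the client address as plain text. A rough Python/WSGI equivalent, exercised with a fake environ so no server is needed (the address is an example value):

```python
# Minimal WSGI sketch of ip.php: return REMOTE_ADDR as text/plain.
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ.get('REMOTE_ADDR', '').encode('utf-8')]

# Drive the app directly with a fabricated environ dict.
body = b''.join(app({'REMOTE_ADDR': '203.0.113.7'}, lambda status, headers: None))
print(body.decode('utf-8'))  # 203.0.113.7
```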
Deleted file (6 lines):

[wheel]
universal = True

[flake8]
exclude = youtube_dl/extractor/__init__.py,devscripts/buildserver.py,devscripts/lazy_load_template.py,devscripts/make_issue_template.py,setup.py,build,.git,venv
ignore = E402,E501,E731,E741,W503
setup.py (deleted, 148 lines):
#!/usr/bin/env python
# coding: utf-8

from __future__ import print_function

import os.path
import warnings
import sys

try:
    from setuptools import setup, Command
    setuptools_available = True
except ImportError:
    from distutils.core import setup, Command
    setuptools_available = False
from distutils.spawn import spawn

try:
    # This will create an exe that needs Microsoft Visual C++ 2008
    # Redistributable Package
    import py2exe
except ImportError:
    if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
        print('Cannot import py2exe', file=sys.stderr)
        exit(1)

py2exe_options = {
    'bundle_files': 1,
    'compressed': 1,
    'optimize': 2,
    'dist_dir': '.',
    'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
}

# Get the version from youtube_dl/version.py without importing the package
exec(compile(open('youtube_dl/version.py').read(),
             'youtube_dl/version.py', 'exec'))

DESCRIPTION = 'YouTube video downloader'
LONG_DESCRIPTION = 'Command-line program to download videos from YouTube.com and other video sites'

py2exe_console = [{
    'script': './youtube_dl/__main__.py',
    'dest_base': 'youtube-dl',
    'version': __version__,
    'description': DESCRIPTION,
    'comments': LONG_DESCRIPTION,
    'product_name': 'youtube-dl',
    'product_version': __version__,
}]

py2exe_params = {
    'console': py2exe_console,
    'options': {'py2exe': py2exe_options},
    'zipfile': None
}

if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
    params = py2exe_params
else:
    files_spec = [
        ('etc/bash_completion.d', ['youtube-dl.bash-completion']),
        ('etc/fish/completions', ['youtube-dl.fish']),
        ('share/doc/youtube_dl', ['README.txt']),
        ('share/man/man1', ['youtube-dl.1'])
    ]
    root = os.path.dirname(os.path.abspath(__file__))
    data_files = []
    for dirname, files in files_spec:
        resfiles = []
        for fn in files:
            if not os.path.exists(fn):
                warnings.warn('Skipping file %s since it is not present. Type make to build all automatically generated files.' % fn)
            else:
                resfiles.append(fn)
        data_files.append((dirname, resfiles))

    params = {
        'data_files': data_files,
    }
    if setuptools_available:
        params['entry_points'] = {'console_scripts': ['youtube-dl = youtube_dl:main']}
    else:
        params['scripts'] = ['bin/youtube-dl']

class build_lazy_extractors(Command):
    description = 'Build the extractor lazy loading module'
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        spawn(
            [sys.executable, 'devscripts/make_lazy_extractors.py', 'youtube_dl/extractor/lazy_extractors.py'],
            dry_run=self.dry_run,
        )

setup(
    name='youtube_dl',
    version=__version__,
    description=DESCRIPTION,
    long_description=LONG_DESCRIPTION,
    url='https://github.com/ytdl-org/youtube-dl',
    author='Ricardo Garcia',
    author_email='ytdl@yt-dl.org',
    maintainer='Sergey M.',
    maintainer_email='dstftw@gmail.com',
    license='Unlicense',
    packages=[
        'youtube_dl',
        'youtube_dl.extractor', 'youtube_dl.downloader',
        'youtube_dl.postprocessor'],

    # Provokes warning on most systems (why?!)
    # test_suite = 'nose.collector',
    # test_requires = ['nosetest'],

    classifiers=[
        'Topic :: Multimedia :: Video',
        'Development Status :: 5 - Production/Stable',
        'Environment :: Console',
        'License :: Public Domain',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.6',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.2',
        'Programming Language :: Python :: 3.3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
        'Programming Language :: Python :: Implementation',
        'Programming Language :: Python :: Implementation :: CPython',
        'Programming Language :: Python :: Implementation :: IronPython',
        'Programming Language :: Python :: Implementation :: Jython',
        'Programming Language :: Python :: Implementation :: PyPy',
    ],

    cmdclass={'build_lazy_extractors': build_lazy_extractors},
    **params
)
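The `exec(compile(open('youtube_dl/version.py').read(), ...))` line reads `__version__` without importing the package (which would pull in all extractors). A self-contained sketch of the trick, using an in-memory stand-in for the version file:

```python
# Stand-in for the contents of youtube_dl/version.py; the version string
# is an example value, not authoritative.
version_py = "__version__ = '2020.11.12'\n"

# exec into an explicit namespace instead of the module globals.
namespace = {}
exec(compile(version_py, 'version.py', 'exec'), namespace)
print(namespace['__version__'])  # 2020.11.12
```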
style.css (new file, 152 lines):
body {
    font-family: sans-serif;
    margin-left: 10%;
    margin-right: 10%;
    margin-top: 2ex;
    margin-bottom: 3ex;
    background-color: white;
    color: black;
    /*background-color: #fff1db;*/
    background-color: white;
    /*
    background-image: url("gradient.png");
    background-repeat: repeat-x;
    */
    /*
    background-image: url("gradient2.png");
    background-repeat: repeat-y;
    */
    /*
    background-image: url("gradient3.png");
    background-repeat: repeat-x;
    */
    /*
    background-image: url("gradient4.png");
    background-repeat: repeat-y;
    */
    background-image: url("gradient5.png");
    background-repeat: repeat-x;
}

.heading {
    border: 0;
    color: black;
    font-size: xx-large;
    font-weight: bold;
    padding-bottom: 1ex;
    border-bottom: 1px solid black;
    margin-bottom: 2ex;
    width: 100%;
}

.heading tr {
    border: 0;
}

.heading td {
    border: 0;
}

.heading a {
    text-decoration: none;
    color: black;
}

.title {
    text-align: left;
}

.subtitle {
    text-align: right;
}

.toc {
    padding-left: 2ex;
    border: 1px solid #aaaaaa;
    background-color: white;
    padding-bottom: 1ex;
    border-radius: 10px;
    -moz-border-radius: 10px;
}

.toc ul {
    margin: 0; list-style-type: none;
}

hr {
    margin-top: 3ex;
    margin-bottom: 3ex;
    width: 50%;
}

.note {
    margin-top: 10ex;
    text-align: center;
    font-size: x-small;
    clear: both;
}

h1 {
    font-size: x-large;
    margin-top: 2ex;
    color: black;
    margin-left: 2%;
    margin-right: 2%;
}

h2 {
    font-size: large;
    margin-left: 5%;
    margin-right: 5%;
}

p {
    margin-left: 5%;
    margin-right: 5%;
}

ul {
    margin-left: 5%;
    margin-right: 5%;
}

li {
    margin-left: 3%;
    margin-top: 0.5ex;
    margin-bottom: 0.5ex;
}

tt {
    padding-left: 0.5ex;
    padding-right: 0.5ex;
    background: #dddddd;
}

#rgb {
    width: 33%;
    margin: 3ex auto;
}

.button {
    color: white;
    font-weight: bold;
    font-size: x-large;
    text-decoration: none;
    text-align: center;
    display: block;
    padding: 2ex;
    border-radius: 10px;
    -moz-border-radius: 10px;
}

#r {
    background-color: #884444;
    border: 2px solid #880000;
}

#g {
    background-color: #448844;
    border: 2px solid #006600;
}

#b {
    background-color: #444488;
    border: 2px solid #000088;
}

#y {
    background-color: #888844;
    border: 2px solid #666600;
}

#main-homepage {
    background-color: #848;
    border: 2px solid #808;
}

#main-support {
    background-color: #448888;
    border: 2px solid #008888;
}

.all-versions {
    float: left;
}
test/helper.py (deleted, 282 lines):
from __future__ import unicode_literals

import errno
import io
import hashlib
import json
import os.path
import re
import types
import ssl
import sys

import youtube_dl.extractor
from youtube_dl import YoutubeDL
from youtube_dl.compat import (
    compat_os_name,
    compat_str,
)
from youtube_dl.utils import (
    preferredencoding,
    write_string,
)


def get_params(override=None):
    PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                   "parameters.json")
    LOCAL_PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                         "local_parameters.json")
    with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
        parameters = json.load(pf)
    if os.path.exists(LOCAL_PARAMETERS_FILE):
        with io.open(LOCAL_PARAMETERS_FILE, encoding='utf-8') as pf:
            parameters.update(json.load(pf))
    if override:
        parameters.update(override)
    return parameters


def try_rm(filename):
    """ Remove a file if it exists """
    try:
        os.remove(filename)
    except OSError as ose:
        if ose.errno != errno.ENOENT:
            raise


def report_warning(message):
    '''
    Print the message to stderr, it will be prefixed with 'WARNING:'
    If stderr is a tty file the 'WARNING:' will be colored
    '''
    if sys.stderr.isatty() and compat_os_name != 'nt':
        _msg_header = '\033[0;33mWARNING:\033[0m'
    else:
        _msg_header = 'WARNING:'
    output = '%s %s\n' % (_msg_header, message)
    if 'b' in getattr(sys.stderr, 'mode', '') or sys.version_info[0] < 3:
        output = output.encode(preferredencoding())
    sys.stderr.write(output)


class FakeYDL(YoutubeDL):
    def __init__(self, override=None):
        # Different instances of the downloader can't share the same dictionary
        # some test set the "sublang" parameter, which would break the md5 checks.
        params = get_params(override=override)
        super(FakeYDL, self).__init__(params, auto_init=False)
        self.result = []

    def to_screen(self, s, skip_eol=None):
        print(s)

    def trouble(self, s, tb=None):
        raise Exception(s)

    def download(self, x):
        self.result.append(x)

    def expect_warning(self, regex):
        # Silence an expected warning matching a regex
        old_report_warning = self.report_warning

        def report_warning(self, message):
            if re.match(regex, message):
                return
            old_report_warning(message)
        self.report_warning = types.MethodType(report_warning, self)


def gettestcases(include_onlymatching=False):
    for ie in youtube_dl.extractor.gen_extractors():
        for tc in ie.get_testcases(include_onlymatching):
            yield tc


md5 = lambda s: hashlib.md5(s.encode('utf-8')).hexdigest()


def expect_value(self, got, expected, field):
    if isinstance(expected, compat_str) and expected.startswith('re:'):
        match_str = expected[len('re:'):]
        match_rex = re.compile(match_str)

        self.assertTrue(
            isinstance(got, compat_str),
            'Expected a %s object, but got %s for field %s' % (
                compat_str.__name__, type(got).__name__, field))
        self.assertTrue(
            match_rex.match(got),
            'field %s (value: %r) should match %r' % (field, got, match_str))
    elif isinstance(expected, compat_str) and expected.startswith('startswith:'):
        start_str = expected[len('startswith:'):]
        self.assertTrue(
            isinstance(got, compat_str),
            'Expected a %s object, but got %s for field %s' % (
                compat_str.__name__, type(got).__name__, field))
        self.assertTrue(
            got.startswith(start_str),
            'field %s (value: %r) should start with %r' % (field, got, start_str))
    elif isinstance(expected, compat_str) and expected.startswith('contains:'):
        contains_str = expected[len('contains:'):]
        self.assertTrue(
            isinstance(got, compat_str),
            'Expected a %s object, but got %s for field %s' % (
                compat_str.__name__, type(got).__name__, field))
        self.assertTrue(
            contains_str in got,
            'field %s (value: %r) should contain %r' % (field, got, contains_str))
    elif isinstance(expected, type):
        self.assertTrue(
            isinstance(got, expected),
            'Expected type %r for field %s, but got value %r of type %r' % (expected, field, got, type(got)))
    elif isinstance(expected, dict) and isinstance(got, dict):
        expect_dict(self, got, expected)
    elif isinstance(expected, list) and isinstance(got, list):
        self.assertEqual(
            len(expected), len(got),
            'Expect a list of length %d, but got a list of length %d for field %s' % (
                len(expected), len(got), field))
        for index, (item_got, item_expected) in enumerate(zip(got, expected)):
            type_got = type(item_got)
            type_expected = type(item_expected)
            self.assertEqual(
                type_expected, type_got,
                'Type mismatch for list item at index %d for field %s, expected %r, got %r' % (
                    index, field, type_expected, type_got))
            expect_value(self, item_got, item_expected, field)
    else:
        if isinstance(expected, compat_str) and expected.startswith('md5:'):
            self.assertTrue(
                isinstance(got, compat_str),
                'Expected field %s to be a unicode object, but got value %r of type %r' % (field, got, type(got)))
            got = 'md5:' + md5(got)
        elif isinstance(expected, compat_str) and re.match(r'^(?:min|max)?count:\d+', expected):
            self.assertTrue(
                isinstance(got, (list, dict)),
                'Expected field %s to be a list or a dict, but it is of type %s' % (
                    field, type(got).__name__))
            op, _, expected_num = expected.partition(':')
            expected_num = int(expected_num)
            if op == 'mincount':
                assert_func = assertGreaterEqual
                msg_tmpl = 'Expected %d items in field %s, but only got %d'
            elif op == 'maxcount':
                assert_func = assertLessEqual
                msg_tmpl = 'Expected maximum %d items in field %s, but got %d'
            elif op == 'count':
                assert_func = assertEqual
                msg_tmpl = 'Expected exactly %d items in field %s, but got %d'
            else:
                assert False
            assert_func(
                self, len(got), expected_num,
                msg_tmpl % (expected_num, field, len(got)))
            return
        self.assertEqual(
            expected, got,
            'Invalid value for field %s, expected %r, got %r' % (field, expected, got))


def expect_dict(self, got_dict, expected_dict):
    for info_field, expected in expected_dict.items():
        got = got_dict.get(info_field)
        expect_value(self, got, expected, info_field)


def expect_info_dict(self, got_dict, expected_dict):
    expect_dict(self, got_dict, expected_dict)
    # Check for the presence of mandatory fields
    if got_dict.get('_type') not in ('playlist', 'multi_video'):
        for key in ('id', 'url', 'title', 'ext'):
            self.assertTrue(got_dict.get(key), 'Missing mandatory field %s' % key)
    # Check for mandatory fields that are automatically set by YoutubeDL
    for key in ['webpage_url', 'extractor', 'extractor_key']:
        self.assertTrue(got_dict.get(key), 'Missing field: %s' % key)

    # Are checkable fields missing from the test case definition?
    test_info_dict = dict((key, value if not isinstance(value, compat_str) or len(value) < 250 else 'md5:' + md5(value))
                          for key, value in got_dict.items()
                          if value and key in ('id', 'title', 'description', 'uploader', 'upload_date', 'timestamp', 'uploader_id', 'location', 'age_limit'))
    missing_keys = set(test_info_dict.keys()) - set(expected_dict.keys())
    if missing_keys:
        def _repr(v):
            if isinstance(v, compat_str):
                return "'%s'" % v.replace('\\', '\\\\').replace("'", "\\'").replace('\n', '\\n')
            else:
                return repr(v)
        info_dict_str = ''
        if len(missing_keys) != len(expected_dict):
            info_dict_str += ''.join(
                '    %s: %s,\n' % (_repr(k), _repr(v))
                for k, v in test_info_dict.items() if k not in missing_keys)

            if info_dict_str:
                info_dict_str += '\n'
        info_dict_str += ''.join(
            '    %s: %s,\n' % (_repr(k), _repr(test_info_dict[k]))
            for k in missing_keys)
        write_string(
            '\n\'info_dict\': {\n' + info_dict_str + '},\n', out=sys.stderr)
        self.assertFalse(
            missing_keys,
            'Missing keys in test definition: %s' % (
                ', '.join(sorted(missing_keys))))


def assertRegexpMatches(self, text, regexp, msg=None):
    if hasattr(self, 'assertRegexp'):
        return self.assertRegexp(text, regexp, msg)
    else:
        m = re.match(regexp, text)
        if not m:
            note = 'Regexp didn\'t match: %r not found' % (regexp)
            if len(text) < 1000:
                note += ' in %r' % text
            if msg is None:
                msg = note
            else:
                msg = note + ', ' + msg
            self.assertTrue(m, msg)


def assertGreaterEqual(self, got, expected, msg=None):
    if not (got >= expected):
        if msg is None:
            msg = '%r not greater than or equal to %r' % (got, expected)
        self.assertTrue(got >= expected, msg)


def assertLessEqual(self, got, expected, msg=None):
    if not (got <= expected):
        if msg is None:
            msg = '%r not less than or equal to %r' % (got, expected)
        self.assertTrue(got <= expected, msg)


def assertEqual(self, got, expected, msg=None):
    if not (got == expected):
        if msg is None:
            msg = '%r not equal to %r' % (got, expected)
        self.assertTrue(got == expected, msg)


def expect_warnings(ydl, warnings_re):
    real_warning = ydl.report_warning

    def _report_warning(w):
        if not any(re.search(w_re, w) for w_re in warnings_re):
            real_warning(w)

    ydl.report_warning = _report_warning


def http_server_port(httpd):
    if os.name == 'java' and isinstance(httpd.socket, ssl.SSLSocket):
        # In Jython SSLSocket is not a subclass of socket.socket
        sock = httpd.socket.sock
    else:
        sock = httpd.socket
    return sock.getsockname()[1]
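expect_value() in the helper compares long string fields against an `md5:<hexdigest>` spec instead of embedding the full text in test definitions. A minimal sketch of that convention (the field value is a stand-in):

```python
import hashlib

# Same one-liner as test/helper.py's md5 helper.
md5 = lambda s: hashlib.md5(s.encode('utf-8')).hexdigest()

description = 'some long description ' * 20  # stand-in for a long info_dict field
expected = 'md5:' + md5(description)

# expect_value() rewrites the got value the same way before comparing,
# so a matching field reduces to string equality on the digest.
assert 'md5:' + md5(description) == expected
print(expected[:4])  # md5:
```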
Deleted file (43 lines):

{
    "consoletitle": false,
    "continuedl": true,
    "forcedescription": false,
    "forcefilename": false,
    "forceformat": false,
    "forcethumbnail": false,
    "forcetitle": false,
    "forceurl": false,
    "format": "best",
    "ignoreerrors": false,
    "listformats": null,
    "logtostderr": false,
    "matchtitle": null,
    "max_downloads": null,
    "nooverwrites": false,
    "nopart": false,
    "noprogress": false,
    "outtmpl": "%(id)s.%(ext)s",
    "password": null,
    "playlistend": -1,
    "playliststart": 1,
    "prefer_free_formats": false,
    "quiet": false,
    "ratelimit": null,
    "rejecttitle": null,
    "retries": 10,
    "simulate": false,
    "subtitleslang": null,
    "subtitlesformat": "best",
    "test": true,
    "updatetime": true,
    "usenetrc": false,
    "username": null,
    "verbose": true,
    "writedescription": false,
    "writeinfojson": true,
    "writesubtitles": false,
    "allsubtitles": false,
    "listssubtitles": false,
    "socket_timeout": 20,
    "fixup": "never"
}
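These JSON defaults are what test/helper.py's get_params() loads and then merges with a per-test override dict. A self-contained sketch of that merge (the values here are a small excerpt, not the full file):

```python
import json

# Base defaults as loaded from the parameters JSON (excerpt), then a
# per-test override applied on top, mirroring get_params(override=...).
base = json.loads('{"format": "best", "retries": 10, "verbose": true}')
override = {'retries': 3}

params = dict(base)
params.update(override)
print(params['retries'])  # 3
```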
test/swftests/.gitignore (deleted, 1 line):

*.swf
Deleted file (19 lines):

// input: [["a", "b", "c", "d"]]
// output: ["c", "b", "a", "d"]

package {
public class ArrayAccess {
    public static function main(ar:Array):Array {
        var aa:ArrayAccess = new ArrayAccess();
        return aa.f(ar, 2);
    }

    private function f(ar:Array, num:Number):Array{
        var x:String = ar[0];
        var y:String = ar[num % ar.length];
        ar[0] = y;
        ar[num] = x;
        return ar;
    }
}
}

Deleted file (17 lines):

// input: []
// output: 121

package {
public class ClassCall {
    public static function main():int{
        var f:OtherClass = new OtherClass();
        return f.func(100,20);
    }
}
}

class OtherClass {
    public function func(x: int, y: int):int {
        return x+y+1;
    }
}

Deleted file (15 lines):

// input: []
// output: 0

package {
public class ClassConstruction {
    public static function main():int{
        var f:Foo = new Foo();
        return 0;
    }
}
}

class Foo {

}

Deleted file (18 lines):

// input: []
// output: 4

package {
public class ConstArrayAccess {
    private static const x:int = 2;
    private static const ar:Array = ["42", "3411"];

    public static function main():int{
        var c:ConstArrayAccess = new ConstArrayAccess();
        return c.f();
    }

    public function f(): int {
        return ar[1].length;
    }
}
}

Deleted file (12 lines):

// input: []
// output: 2

package {
public class ConstantInt {
    private static const x:int = 2;

    public static function main():int{
        return x;
    }
}
}

Deleted file (10 lines):

// input: [{"x": 1, "y": 2}]
// output: 3

package {
public class DictCall {
    public static function main(d:Object):int{
        return d.x + d.y;
    }
}
}

Deleted file (10 lines):

// input: []
// output: false

package {
public class EqualsOperator {
    public static function main():Boolean{
        return 1 == 2;
    }
}
}

Deleted file (13 lines):

// input: [1, 2]
// output: 3

package {
public class LocalVars {
    public static function main(a:int, b:int):int{
        var c:int = a + b + b;
        var d:int = c - b;
        var e:int = d;
        return e;
    }
}
}

Deleted file (22 lines):

// input: [1]
// output: 2

package {
public class MemberAssignment {
    public var v:int;

    public function g():int {
        return this.v;
    }

    public function f(a:int):int{
        this.v = a;
        return this.v + this.g();
    }

    public static function main(a:int): int {
        var v:MemberAssignment = new MemberAssignment();
        return v.f(a);
    }
}
}

Deleted file (24 lines):

// input: []
// output: 123

package {
public class NeOperator {
    public static function main(): int {
        var res:int = 0;
        if (1 != 2) {
            res += 3;
        } else {
            res += 4;
        }
        if (2 != 2) {
            res += 10;
        } else {
            res += 20;
        }
        if (9 == 9) {
            res += 100;
        }
        return res;
    }
}
}

Deleted file (21 lines):

// input: []
// output: 9

package {
public class PrivateCall {
    public static function main():int{
        var f:OtherClass = new OtherClass();
        return f.func();
    }
}
}

class OtherClass {
    private function pf():int {
        return 9;
    }

    public function func():int {
        return this.pf();
    }
}

Deleted file (22 lines):

// input: []
// output: 9

package {
public class PrivateVoidCall {
    public static function main():int{
        var f:OtherClass = new OtherClass();
        f.func();
        return 9;
    }
}
}

class OtherClass {
    private function pf():void {
        ;
    }

    public function func():void {
        this.pf();
    }
}

Deleted file (13 lines):

// input: [1]
// output: 1

package {
public class StaticAssignment {
    public static var v:int;

    public static function main(a:int):int{
        v = a;
        return v;
    }
}
}

Deleted file (16 lines):

// input: []
// output: 1

package {
public class StaticRetrieval {
    public static var v:int;

    public static function main():int{
        if (v) {
            return 0;
        } else {
            return 1;
        }
    }
}
}

Deleted file (11 lines):

// input: []
// output: 3

package {
public class StringBasics {
    public static function main():int{
        var s:String = "abc";
        return s.length;
    }
}
}

Deleted file (11 lines):

// input: []
// output: 9897

package {
public class StringCharCodeAt {
    public static function main():int{
        var s:String = "abc";
        return s.charCodeAt(1) * 100 + s.charCodeAt();
    }
}
}

Deleted file (11 lines):

// input: []
// output: 2

package {
public class StringConversion {
    public static function main():int{
        var s:String = String(99);
        return s.length;
    }
}
}
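Each of the swftests above pairs an ActionScript program with the output the SWF interpreter is expected to produce. For example, the StringCharCodeAt case expects 9897; the arithmetic can be checked directly in Python (charCodeAt with no argument defaults to index 0):

```python
# Python check of the StringCharCodeAt fixture's expected value:
# charCodeAt(1) * 100 + charCodeAt() for "abc" -> 98 * 100 + 97.
s = 'abc'
result = ord(s[1]) * 100 + ord(s[0])
print(result)  # 9897
```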
(file diff suppressed because it is too large)
|
@@ -1,924 +0,0 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import copy

from test.helper import FakeYDL, assertRegexpMatches
from youtube_dl import YoutubeDL
from youtube_dl.compat import compat_str, compat_urllib_error
from youtube_dl.extractor import YoutubeIE
from youtube_dl.extractor.common import InfoExtractor
from youtube_dl.postprocessor.common import PostProcessor
from youtube_dl.utils import ExtractorError, match_filter_func

TEST_URL = 'http://localhost/sample.mp4'

class YDL(FakeYDL):
    def __init__(self, *args, **kwargs):
        super(YDL, self).__init__(*args, **kwargs)
        self.downloaded_info_dicts = []
        self.msgs = []

    def process_info(self, info_dict):
        self.downloaded_info_dicts.append(info_dict)

    def to_screen(self, msg):
        self.msgs.append(msg)


def _make_result(formats, **kwargs):
    res = {
        'formats': formats,
        'id': 'testid',
        'title': 'testttitle',
        'extractor': 'testex',
        'extractor_key': 'TestEx',
    }
    res.update(**kwargs)
    return res

class TestFormatSelection(unittest.TestCase):
    def test_prefer_free_formats(self):
        # Same resolution => download webm
        ydl = YDL()
        ydl.params['prefer_free_formats'] = True
        formats = [
            {'ext': 'webm', 'height': 460, 'url': TEST_URL},
            {'ext': 'mp4', 'height': 460, 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['ext'], 'webm')

        # Different resolution => download best quality (mp4)
        ydl = YDL()
        ydl.params['prefer_free_formats'] = True
        formats = [
            {'ext': 'webm', 'height': 720, 'url': TEST_URL},
            {'ext': 'mp4', 'height': 1080, 'url': TEST_URL},
        ]
        info_dict['formats'] = formats
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['ext'], 'mp4')

        # No prefer_free_formats => prefer mp4 and flv for greater compatibility
        ydl = YDL()
        ydl.params['prefer_free_formats'] = False
        formats = [
            {'ext': 'webm', 'height': 720, 'url': TEST_URL},
            {'ext': 'mp4', 'height': 720, 'url': TEST_URL},
            {'ext': 'flv', 'height': 720, 'url': TEST_URL},
        ]
        info_dict['formats'] = formats
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['ext'], 'mp4')

        ydl = YDL()
        ydl.params['prefer_free_formats'] = False
        formats = [
            {'ext': 'flv', 'height': 720, 'url': TEST_URL},
            {'ext': 'webm', 'height': 720, 'url': TEST_URL},
        ]
        info_dict['formats'] = formats
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['ext'], 'flv')

    def test_format_selection(self):
        formats = [
            {'format_id': '35', 'ext': 'mp4', 'preference': 1, 'url': TEST_URL},
            {'format_id': 'example-with-dashes', 'ext': 'webm', 'preference': 1, 'url': TEST_URL},
            {'format_id': '45', 'ext': 'webm', 'preference': 2, 'url': TEST_URL},
            {'format_id': '47', 'ext': 'webm', 'preference': 3, 'url': TEST_URL},
            {'format_id': '2', 'ext': 'flv', 'preference': 4, 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        ydl = YDL({'format': '20/47'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], '47')

        ydl = YDL({'format': '20/71/worst'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], '35')

        ydl = YDL()
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], '2')

        ydl = YDL({'format': 'webm/mp4'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], '47')

        ydl = YDL({'format': '3gp/40/mp4'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], '35')

        ydl = YDL({'format': 'example-with-dashes'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'example-with-dashes')

    def test_format_selection_audio(self):
        formats = [
            {'format_id': 'audio-low', 'ext': 'webm', 'preference': 1, 'vcodec': 'none', 'url': TEST_URL},
            {'format_id': 'audio-mid', 'ext': 'webm', 'preference': 2, 'vcodec': 'none', 'url': TEST_URL},
            {'format_id': 'audio-high', 'ext': 'flv', 'preference': 3, 'vcodec': 'none', 'url': TEST_URL},
            {'format_id': 'vid', 'ext': 'mp4', 'preference': 4, 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        ydl = YDL({'format': 'bestaudio'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'audio-high')

        ydl = YDL({'format': 'worstaudio'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'audio-low')

        formats = [
            {'format_id': 'vid-low', 'ext': 'mp4', 'preference': 1, 'url': TEST_URL},
            {'format_id': 'vid-high', 'ext': 'mp4', 'preference': 2, 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        ydl = YDL({'format': 'bestaudio/worstaudio/best'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'vid-high')

    def test_format_selection_audio_exts(self):
        formats = [
            {'format_id': 'mp3-64', 'ext': 'mp3', 'abr': 64, 'url': 'http://_', 'vcodec': 'none'},
            {'format_id': 'ogg-64', 'ext': 'ogg', 'abr': 64, 'url': 'http://_', 'vcodec': 'none'},
            {'format_id': 'aac-64', 'ext': 'aac', 'abr': 64, 'url': 'http://_', 'vcodec': 'none'},
            {'format_id': 'mp3-32', 'ext': 'mp3', 'abr': 32, 'url': 'http://_', 'vcodec': 'none'},
            {'format_id': 'aac-32', 'ext': 'aac', 'abr': 32, 'url': 'http://_', 'vcodec': 'none'},
        ]

        info_dict = _make_result(formats)
        ydl = YDL({'format': 'best'})
        ie = YoutubeIE(ydl)
        ie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(copy.deepcopy(info_dict))
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'aac-64')

        ydl = YDL({'format': 'mp3'})
        ie = YoutubeIE(ydl)
        ie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(copy.deepcopy(info_dict))
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'mp3-64')

        ydl = YDL({'prefer_free_formats': True})
        ie = YoutubeIE(ydl)
        ie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(copy.deepcopy(info_dict))
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'ogg-64')

    def test_format_selection_video(self):
        formats = [
            {'format_id': 'dash-video-low', 'ext': 'mp4', 'preference': 1, 'acodec': 'none', 'url': TEST_URL},
            {'format_id': 'dash-video-high', 'ext': 'mp4', 'preference': 2, 'acodec': 'none', 'url': TEST_URL},
            {'format_id': 'vid', 'ext': 'mp4', 'preference': 3, 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        ydl = YDL({'format': 'bestvideo'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'dash-video-high')

        ydl = YDL({'format': 'worstvideo'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'dash-video-low')

        ydl = YDL({'format': 'bestvideo[format_id^=dash][format_id$=low]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'dash-video-low')

        formats = [
            {'format_id': 'vid-vcodec-dot', 'ext': 'mp4', 'preference': 1, 'vcodec': 'avc1.123456', 'acodec': 'none', 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        ydl = YDL({'format': 'bestvideo[vcodec=avc1.123456]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'vid-vcodec-dot')

    def test_format_selection_string_ops(self):
        formats = [
            {'format_id': 'abc-cba', 'ext': 'mp4', 'url': TEST_URL},
            {'format_id': 'zxc-cxz', 'ext': 'webm', 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        # equals (=)
        ydl = YDL({'format': '[format_id=abc-cba]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'abc-cba')

        # does not equal (!=)
        ydl = YDL({'format': '[format_id!=abc-cba]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'zxc-cxz')

        ydl = YDL({'format': '[format_id!=abc-cba][format_id!=zxc-cxz]'})
        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())

        # starts with (^=)
        ydl = YDL({'format': '[format_id^=abc]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'abc-cba')

        # does not start with (!^=)
        ydl = YDL({'format': '[format_id!^=abc]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'zxc-cxz')

        ydl = YDL({'format': '[format_id!^=abc][format_id!^=zxc]'})
        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())

        # ends with ($=)
        ydl = YDL({'format': '[format_id$=cba]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'abc-cba')

        # does not end with (!$=)
        ydl = YDL({'format': '[format_id!$=cba]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'zxc-cxz')

        ydl = YDL({'format': '[format_id!$=cba][format_id!$=cxz]'})
        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())

        # contains (*=)
        ydl = YDL({'format': '[format_id*=bc-cb]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'abc-cba')

        # does not contain (!*=)
        ydl = YDL({'format': '[format_id!*=bc-cb]'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'zxc-cxz')

        ydl = YDL({'format': '[format_id!*=abc][format_id!*=zxc]'})
        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())

        ydl = YDL({'format': '[format_id!*=-]'})
        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())

    def test_youtube_format_selection(self):
        order = [
            '38', '37', '46', '22', '45', '35', '44', '18', '34', '43', '6', '5', '17', '36', '13',
            # Apple HTTP Live Streaming
            '96', '95', '94', '93', '92', '132', '151',
            # 3D
            '85', '84', '102', '83', '101', '82', '100',
            # Dash video
            '137', '248', '136', '247', '135', '246',
            '245', '244', '134', '243', '133', '242', '160',
            # Dash audio
            '141', '172', '140', '171', '139',
        ]

        def format_info(f_id):
            info = YoutubeIE._formats[f_id].copy()

            # XXX: In real cases InfoExtractor._parse_mpd_formats() fills up 'acodec'
            # and 'vcodec', while in tests such information is incomplete since
            # commit a6c2c24479e5f4827ceb06f64d855329c0a6f593
            # test_YoutubeDL.test_youtube_format_selection is broken without
            # this fix
            if 'acodec' in info and 'vcodec' not in info:
                info['vcodec'] = 'none'
            elif 'vcodec' in info and 'acodec' not in info:
                info['acodec'] = 'none'

            info['format_id'] = f_id
            info['url'] = 'url:' + f_id
            return info
        formats_order = [format_info(f_id) for f_id in order]

        info_dict = _make_result(list(formats_order), extractor='youtube')
        ydl = YDL({'format': 'bestvideo+bestaudio'})
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], '137+141')
        self.assertEqual(downloaded['ext'], 'mp4')

        info_dict = _make_result(list(formats_order), extractor='youtube')
        ydl = YDL({'format': 'bestvideo[height>=999999]+bestaudio/best'})
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], '38')

        info_dict = _make_result(list(formats_order), extractor='youtube')
        ydl = YDL({'format': 'bestvideo/best,bestaudio'})
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
        self.assertEqual(downloaded_ids, ['137', '141'])

        info_dict = _make_result(list(formats_order), extractor='youtube')
        ydl = YDL({'format': '(bestvideo[ext=mp4],bestvideo[ext=webm])+bestaudio'})
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
        self.assertEqual(downloaded_ids, ['137+141', '248+141'])

        info_dict = _make_result(list(formats_order), extractor='youtube')
        ydl = YDL({'format': '(bestvideo[ext=mp4],bestvideo[ext=webm])[height<=720]+bestaudio'})
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
        self.assertEqual(downloaded_ids, ['136+141', '247+141'])

        info_dict = _make_result(list(formats_order), extractor='youtube')
        ydl = YDL({'format': '(bestvideo[ext=none]/bestvideo[ext=webm])+bestaudio'})
        yie = YoutubeIE(ydl)
        yie._sort_formats(info_dict['formats'])
        ydl.process_ie_result(info_dict)
        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
        self.assertEqual(downloaded_ids, ['248+141'])

        for f1, f2 in zip(formats_order, formats_order[1:]):
            info_dict = _make_result([f1, f2], extractor='youtube')
            ydl = YDL({'format': 'best/bestvideo'})
            yie = YoutubeIE(ydl)
            yie._sort_formats(info_dict['formats'])
            ydl.process_ie_result(info_dict)
            downloaded = ydl.downloaded_info_dicts[0]
            self.assertEqual(downloaded['format_id'], f1['format_id'])

            info_dict = _make_result([f2, f1], extractor='youtube')
            ydl = YDL({'format': 'best/bestvideo'})
            yie = YoutubeIE(ydl)
            yie._sort_formats(info_dict['formats'])
            ydl.process_ie_result(info_dict)
            downloaded = ydl.downloaded_info_dicts[0]
            self.assertEqual(downloaded['format_id'], f1['format_id'])

    def test_audio_only_extractor_format_selection(self):
        # For extractors with incomplete formats (all formats are audio-only or
        # video-only) best and worst should fallback to corresponding best/worst
        # video-only or audio-only formats (as per
        # https://github.com/ytdl-org/youtube-dl/pull/5556)
        formats = [
            {'format_id': 'low', 'ext': 'mp3', 'preference': 1, 'vcodec': 'none', 'url': TEST_URL},
            {'format_id': 'high', 'ext': 'mp3', 'preference': 2, 'vcodec': 'none', 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        ydl = YDL({'format': 'best'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'high')

        ydl = YDL({'format': 'worst'})
        ydl.process_ie_result(info_dict.copy())
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'low')

    def test_format_not_available(self):
        formats = [
            {'format_id': 'regular', 'ext': 'mp4', 'height': 360, 'url': TEST_URL},
            {'format_id': 'video', 'ext': 'mp4', 'height': 720, 'acodec': 'none', 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        # This must fail since complete video-audio format does not match filter
        # and extractor does not provide incomplete only formats (i.e. only
        # video-only or audio-only).
        ydl = YDL({'format': 'best[height>360]'})
        self.assertRaises(ExtractorError, ydl.process_ie_result, info_dict.copy())

    def test_format_selection_issue_10083(self):
        # See https://github.com/ytdl-org/youtube-dl/issues/10083
        formats = [
            {'format_id': 'regular', 'height': 360, 'url': TEST_URL},
            {'format_id': 'video', 'height': 720, 'acodec': 'none', 'url': TEST_URL},
            {'format_id': 'audio', 'vcodec': 'none', 'url': TEST_URL},
        ]
        info_dict = _make_result(formats)

        ydl = YDL({'format': 'best[height>360]/bestvideo[height>360]+bestaudio'})
        ydl.process_ie_result(info_dict.copy())
        self.assertEqual(ydl.downloaded_info_dicts[0]['format_id'], 'video+audio')

    def test_invalid_format_specs(self):
        def assert_syntax_error(format_spec):
            ydl = YDL({'format': format_spec})
            info_dict = _make_result([{'format_id': 'foo', 'url': TEST_URL}])
            self.assertRaises(SyntaxError, ydl.process_ie_result, info_dict)

        assert_syntax_error('bestvideo,,best')
        assert_syntax_error('+bestaudio')
        assert_syntax_error('bestvideo+')
        assert_syntax_error('/')

    def test_format_filtering(self):
        formats = [
            {'format_id': 'A', 'filesize': 500, 'width': 1000},
            {'format_id': 'B', 'filesize': 1000, 'width': 500},
            {'format_id': 'C', 'filesize': 1000, 'width': 400},
            {'format_id': 'D', 'filesize': 2000, 'width': 600},
            {'format_id': 'E', 'filesize': 3000},
            {'format_id': 'F'},
            {'format_id': 'G', 'filesize': 1000000},
        ]
        for f in formats:
            f['url'] = 'http://_/'
            f['ext'] = 'unknown'
        info_dict = _make_result(formats)

        ydl = YDL({'format': 'best[filesize<3000]'})
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'D')

        ydl = YDL({'format': 'best[filesize<=3000]'})
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'E')

        ydl = YDL({'format': 'best[filesize <= ? 3000]'})
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'F')

        ydl = YDL({'format': 'best [filesize = 1000] [width>450]'})
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'B')

        ydl = YDL({'format': 'best [filesize = 1000] [width!=450]'})
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'C')

        ydl = YDL({'format': '[filesize>?1]'})
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'G')

        ydl = YDL({'format': '[filesize<1M]'})
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'E')

        ydl = YDL({'format': '[filesize<1MiB]'})
        ydl.process_ie_result(info_dict)
        downloaded = ydl.downloaded_info_dicts[0]
        self.assertEqual(downloaded['format_id'], 'G')

        ydl = YDL({'format': 'all[width>=400][width<=600]'})
        ydl.process_ie_result(info_dict)
        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
        self.assertEqual(downloaded_ids, ['B', 'C', 'D'])

        ydl = YDL({'format': 'best[height<40]'})
        try:
            ydl.process_ie_result(info_dict)
        except ExtractorError:
            pass
        self.assertEqual(ydl.downloaded_info_dicts, [])

    def test_default_format_spec(self):
        ydl = YDL({'simulate': True})
        self.assertEqual(ydl._default_format_spec({}), 'bestvideo+bestaudio/best')

        ydl = YDL({})
        self.assertEqual(ydl._default_format_spec({'is_live': True}), 'best/bestvideo+bestaudio')

        ydl = YDL({'simulate': True})
        self.assertEqual(ydl._default_format_spec({'is_live': True}), 'bestvideo+bestaudio/best')

        ydl = YDL({'outtmpl': '-'})
        self.assertEqual(ydl._default_format_spec({}), 'best/bestvideo+bestaudio')

        ydl = YDL({})
        self.assertEqual(ydl._default_format_spec({}, download=False), 'bestvideo+bestaudio/best')
        self.assertEqual(ydl._default_format_spec({'is_live': True}), 'best/bestvideo+bestaudio')


class TestYoutubeDL(unittest.TestCase):
    def test_subtitles(self):
        def s_formats(lang, autocaption=False):
            return [{
                'ext': ext,
                'url': 'http://localhost/video.%s.%s' % (lang, ext),
                '_auto': autocaption,
            } for ext in ['vtt', 'srt', 'ass']]
        subtitles = dict((l, s_formats(l)) for l in ['en', 'fr', 'es'])
        auto_captions = dict((l, s_formats(l, True)) for l in ['it', 'pt', 'es'])
        info_dict = {
            'id': 'test',
            'title': 'Test',
            'url': 'http://localhost/video.mp4',
            'subtitles': subtitles,
            'automatic_captions': auto_captions,
            'extractor': 'TEST',
        }

        def get_info(params={}):
            params.setdefault('simulate', True)
            ydl = YDL(params)
            ydl.report_warning = lambda *args, **kargs: None
            return ydl.process_video_result(info_dict, download=False)

        result = get_info()
        self.assertFalse(result.get('requested_subtitles'))
        self.assertEqual(result['subtitles'], subtitles)
        self.assertEqual(result['automatic_captions'], auto_captions)

        result = get_info({'writesubtitles': True})
        subs = result['requested_subtitles']
        self.assertTrue(subs)
        self.assertEqual(set(subs.keys()), set(['en']))
        self.assertTrue(subs['en'].get('data') is None)
        self.assertEqual(subs['en']['ext'], 'ass')

        result = get_info({'writesubtitles': True, 'subtitlesformat': 'foo/srt'})
        subs = result['requested_subtitles']
        self.assertEqual(subs['en']['ext'], 'srt')

        result = get_info({'writesubtitles': True, 'subtitleslangs': ['es', 'fr', 'it']})
        subs = result['requested_subtitles']
        self.assertTrue(subs)
        self.assertEqual(set(subs.keys()), set(['es', 'fr']))

        result = get_info({'writesubtitles': True, 'writeautomaticsub': True, 'subtitleslangs': ['es', 'pt']})
        subs = result['requested_subtitles']
        self.assertTrue(subs)
        self.assertEqual(set(subs.keys()), set(['es', 'pt']))
        self.assertFalse(subs['es']['_auto'])
        self.assertTrue(subs['pt']['_auto'])

        result = get_info({'writeautomaticsub': True, 'subtitleslangs': ['es', 'pt']})
        subs = result['requested_subtitles']
        self.assertTrue(subs)
        self.assertEqual(set(subs.keys()), set(['es', 'pt']))
        self.assertTrue(subs['es']['_auto'])
        self.assertTrue(subs['pt']['_auto'])

    def test_add_extra_info(self):
        test_dict = {
            'extractor': 'Foo',
        }
        extra_info = {
            'extractor': 'Bar',
            'playlist': 'funny videos',
        }
        YDL.add_extra_info(test_dict, extra_info)
        self.assertEqual(test_dict['extractor'], 'Foo')
        self.assertEqual(test_dict['playlist'], 'funny videos')

    def test_prepare_filename(self):
        info = {
            'id': '1234',
            'ext': 'mp4',
            'width': None,
            'height': 1080,
            'title1': '$PATH',
            'title2': '%PATH%',
        }

        def fname(templ):
            ydl = YoutubeDL({'outtmpl': templ})
            return ydl.prepare_filename(info)
        self.assertEqual(fname('%(id)s.%(ext)s'), '1234.mp4')
        self.assertEqual(fname('%(id)s-%(width)s.%(ext)s'), '1234-NA.mp4')
        # Replace missing fields with 'NA'
        self.assertEqual(fname('%(uploader_date)s-%(id)s.%(ext)s'), 'NA-1234.mp4')
        self.assertEqual(fname('%(height)d.%(ext)s'), '1080.mp4')
        self.assertEqual(fname('%(height)6d.%(ext)s'), '  1080.mp4')
        self.assertEqual(fname('%(height)-6d.%(ext)s'), '1080  .mp4')
        self.assertEqual(fname('%(height)06d.%(ext)s'), '001080.mp4')
        self.assertEqual(fname('%(height) 06d.%(ext)s'), ' 01080.mp4')
        self.assertEqual(fname('%(height)   06d.%(ext)s'), ' 01080.mp4')
        self.assertEqual(fname('%(height)0 6d.%(ext)s'), ' 01080.mp4')
        self.assertEqual(fname('%(height)0   6d.%(ext)s'), ' 01080.mp4')
        self.assertEqual(fname('%(height)   0   6d.%(ext)s'), ' 01080.mp4')
        self.assertEqual(fname('%%'), '%')
        self.assertEqual(fname('%%%%'), '%%')
        self.assertEqual(fname('%%(height)06d.%(ext)s'), '%(height)06d.mp4')
        self.assertEqual(fname('%(width)06d.%(ext)s'), 'NA.mp4')
        self.assertEqual(fname('%(width)06d.%%(ext)s'), 'NA.%(ext)s')
        self.assertEqual(fname('%%(width)06d.%(ext)s'), '%(width)06d.mp4')
        self.assertEqual(fname('Hello %(title1)s'), 'Hello $PATH')
        self.assertEqual(fname('Hello %(title2)s'), 'Hello %PATH%')

    def test_format_note(self):
        ydl = YoutubeDL()
        self.assertEqual(ydl._format_note({}), '')
        assertRegexpMatches(self, ydl._format_note({
            'vbr': 10,
        }), r'^\s*10k$')
        assertRegexpMatches(self, ydl._format_note({
            'fps': 30,
        }), r'^30fps$')

    def test_postprocessors(self):
        filename = 'post-processor-testfile.mp4'
        audiofile = filename + '.mp3'

        class SimplePP(PostProcessor):
            def run(self, info):
                with open(audiofile, 'wt') as f:
                    f.write('EXAMPLE')
                return [info['filepath']], info

        def run_pp(params, PP):
            with open(filename, 'wt') as f:
                f.write('EXAMPLE')
            ydl = YoutubeDL(params)
            ydl.add_post_processor(PP())
            ydl.post_process(filename, {'filepath': filename})

        run_pp({'keepvideo': True}, SimplePP)
        self.assertTrue(os.path.exists(filename), '%s doesn\'t exist' % filename)
        self.assertTrue(os.path.exists(audiofile), '%s doesn\'t exist' % audiofile)
        os.unlink(filename)
        os.unlink(audiofile)

        run_pp({'keepvideo': False}, SimplePP)
        self.assertFalse(os.path.exists(filename), '%s exists' % filename)
        self.assertTrue(os.path.exists(audiofile), '%s doesn\'t exist' % audiofile)
        os.unlink(audiofile)

        class ModifierPP(PostProcessor):
            def run(self, info):
                with open(info['filepath'], 'wt') as f:
                    f.write('MODIFIED')
                return [], info

        run_pp({'keepvideo': False}, ModifierPP)
        self.assertTrue(os.path.exists(filename), '%s doesn\'t exist' % filename)
        os.unlink(filename)

def test_match_filter(self):
|
|
||||||
class FilterYDL(YDL):
|
|
||||||
def __init__(self, *args, **kwargs):
|
|
||||||
super(FilterYDL, self).__init__(*args, **kwargs)
|
|
||||||
self.params['simulate'] = True
|
|
||||||
|
|
||||||
def process_info(self, info_dict):
|
|
||||||
super(YDL, self).process_info(info_dict)
|
|
||||||
|
|
||||||
def _match_entry(self, info_dict, incomplete):
|
|
||||||
res = super(FilterYDL, self)._match_entry(info_dict, incomplete)
|
|
||||||
if res is None:
|
|
||||||
self.downloaded_info_dicts.append(info_dict)
|
|
||||||
return res
|
|
||||||
|
|
||||||
first = {
|
|
||||||
'id': '1',
|
|
||||||
'url': TEST_URL,
|
|
||||||
'title': 'one',
|
|
||||||
'extractor': 'TEST',
|
|
||||||
'duration': 30,
|
|
||||||
'filesize': 10 * 1024,
|
|
||||||
'playlist_id': '42',
|
|
||||||
'uploader': "變態妍字幕版 太妍 тест",
|
|
||||||
'creator': "тест ' 123 ' тест--",
|
|
||||||
}
|
|
||||||
second = {
|
|
||||||
'id': '2',
|
|
||||||
'url': TEST_URL,
|
|
||||||
'title': 'two',
|
|
||||||
'extractor': 'TEST',
|
|
||||||
'duration': 10,
|
|
||||||
'description': 'foo',
|
|
||||||
'filesize': 5 * 1024,
|
|
||||||
'playlist_id': '43',
|
|
||||||
'uploader': "тест 123",
|
|
||||||
}
|
|
||||||
videos = [first, second]
|
|
||||||
|
|
||||||
def get_videos(filter_=None):
|
|
||||||
ydl = FilterYDL({'match_filter': filter_})
|
|
||||||
for v in videos:
|
|
||||||
ydl.process_ie_result(v, download=True)
|
|
||||||
return [v['id'] for v in ydl.downloaded_info_dicts]
|
|
||||||
|
|
||||||
res = get_videos()
|
|
||||||
self.assertEqual(res, ['1', '2'])
|
|
||||||
|
|
||||||
def f(v):
|
|
||||||
if v['id'] == '1':
|
|
||||||
return None
|
|
||||||
else:
|
|
||||||
return 'Video id is not 1'
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func('duration < 30')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['2'])
|
|
||||||
|
|
||||||
f = match_filter_func('description = foo')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['2'])
|
|
||||||
|
|
||||||
f = match_filter_func('description =? foo')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1', '2'])
|
|
||||||
|
|
||||||
f = match_filter_func('filesize > 5KiB')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func('playlist_id = 42')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func('uploader = "變態妍字幕版 太妍 тест"')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func('uploader != "變態妍字幕版 太妍 тест"')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['2'])
|
|
||||||
|
|
||||||
f = match_filter_func('creator = "тест \' 123 \' тест--"')
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func("creator = 'тест \\' 123 \\' тест--'")
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, ['1'])
|
|
||||||
|
|
||||||
f = match_filter_func(r"creator = 'тест \' 123 \' тест--' & duration > 30")
|
|
||||||
res = get_videos(f)
|
|
||||||
self.assertEqual(res, [])
|
|
||||||
|
|
||||||
def test_playlist_items_selection(self):
|
|
||||||
entries = [{
|
|
||||||
'id': compat_str(i),
|
|
||||||
'title': compat_str(i),
|
|
||||||
'url': TEST_URL,
|
|
||||||
} for i in range(1, 5)]
|
|
||||||
playlist = {
|
|
||||||
'_type': 'playlist',
|
|
||||||
'id': 'test',
|
|
||||||
'entries': entries,
|
|
||||||
'extractor': 'test:playlist',
|
|
||||||
'extractor_key': 'test:playlist',
|
|
||||||
'webpage_url': 'http://example.com',
|
|
||||||
}
|
|
||||||
|
|
||||||
def get_downloaded_info_dicts(params):
|
|
||||||
ydl = YDL(params)
|
|
||||||
# make a deep copy because the dictionary and nested entries
|
|
||||||
# can be modified
|
|
||||||
ydl.process_ie_result(copy.deepcopy(playlist))
|
|
||||||
return ydl.downloaded_info_dicts
|
|
||||||
|
|
||||||
def get_ids(params):
|
|
||||||
return [int(v['id']) for v in get_downloaded_info_dicts(params)]
|
|
||||||
|
|
||||||
result = get_ids({})
|
|
||||||
self.assertEqual(result, [1, 2, 3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlistend': 10})
|
|
||||||
self.assertEqual(result, [1, 2, 3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlistend': 2})
|
|
||||||
self.assertEqual(result, [1, 2])
|
|
||||||
|
|
||||||
result = get_ids({'playliststart': 10})
|
|
||||||
self.assertEqual(result, [])
|
|
||||||
|
|
||||||
result = get_ids({'playliststart': 2})
|
|
||||||
self.assertEqual(result, [2, 3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '2-4'})
|
|
||||||
self.assertEqual(result, [2, 3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '2,4'})
|
|
||||||
self.assertEqual(result, [2, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '10'})
|
|
||||||
self.assertEqual(result, [])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '3-10'})
|
|
||||||
self.assertEqual(result, [3, 4])
|
|
||||||
|
|
||||||
result = get_ids({'playlist_items': '2-4,3-4,3'})
|
|
||||||
self.assertEqual(result, [2, 3, 4])
|
|
||||||
|
|
||||||
# Tests for https://github.com/ytdl-org/youtube-dl/issues/10591
|
|
||||||
# @{
|
|
||||||
result = get_downloaded_info_dicts({'playlist_items': '2-4,3-4,3'})
|
|
||||||
self.assertEqual(result[0]['playlist_index'], 2)
|
|
||||||
self.assertEqual(result[1]['playlist_index'], 3)
|
|
||||||
|
|
||||||
result = get_downloaded_info_dicts({'playlist_items': '2-4,3-4,3'})
|
|
||||||
self.assertEqual(result[0]['playlist_index'], 2)
|
|
||||||
self.assertEqual(result[1]['playlist_index'], 3)
|
|
||||||
self.assertEqual(result[2]['playlist_index'], 4)
|
|
||||||
|
|
||||||
result = get_downloaded_info_dicts({'playlist_items': '4,2'})
|
|
||||||
self.assertEqual(result[0]['playlist_index'], 4)
|
|
||||||
self.assertEqual(result[1]['playlist_index'], 2)
|
|
||||||
# @}
|
|
||||||
|
|
||||||
def test_urlopen_no_file_protocol(self):
|
|
||||||
# see https://github.com/ytdl-org/youtube-dl/issues/8227
|
|
||||||
ydl = YDL()
|
|
||||||
self.assertRaises(compat_urllib_error.URLError, ydl.urlopen, 'file:///etc/passwd')
|
|
||||||
|
|
||||||
def test_do_not_override_ie_key_in_url_transparent(self):
|
|
||||||
ydl = YDL()
|
|
||||||
|
|
||||||
class Foo1IE(InfoExtractor):
|
|
||||||
_VALID_URL = r'foo1:'
|
|
||||||
|
|
||||||
def _real_extract(self, url):
|
|
||||||
return {
|
|
||||||
'_type': 'url_transparent',
|
|
||||||
'url': 'foo2:',
|
|
||||||
'ie_key': 'Foo2',
|
|
||||||
'title': 'foo1 title',
|
|
||||||
'id': 'foo1_id',
|
|
||||||
}
|
|
||||||
|
|
||||||
class Foo2IE(InfoExtractor):
|
|
||||||
_VALID_URL = r'foo2:'
|
|
||||||
|
|
||||||
def _real_extract(self, url):
|
|
||||||
return {
|
|
||||||
'_type': 'url',
|
|
||||||
'url': 'foo3:',
|
|
||||||
'ie_key': 'Foo3',
|
|
||||||
}
|
|
||||||
|
|
||||||
class Foo3IE(InfoExtractor):
|
|
||||||
_VALID_URL = r'foo3:'
|
|
||||||
|
|
||||||
def _real_extract(self, url):
|
|
||||||
return _make_result([{'url': TEST_URL}], title='foo3 title')
|
|
||||||
|
|
||||||
ydl.add_info_extractor(Foo1IE(ydl))
|
|
||||||
ydl.add_info_extractor(Foo2IE(ydl))
|
|
||||||
ydl.add_info_extractor(Foo3IE(ydl))
|
|
||||||
ydl.extract_info('foo1:')
|
|
||||||
downloaded = ydl.downloaded_info_dicts[0]
|
|
||||||
self.assertEqual(downloaded['url'], TEST_URL)
|
|
||||||
self.assertEqual(downloaded['title'], 'foo1 title')
|
|
||||||
self.assertEqual(downloaded['id'], 'testid')
|
|
||||||
self.assertEqual(downloaded['extractor'], 'testex')
|
|
||||||
self.assertEqual(downloaded['extractor_key'], 'TestEx')
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
unittest.main()
|
|
|
@ -1,51 +0,0 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import unicode_literals

import os
import re
import sys
import tempfile
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.utils import YoutubeDLCookieJar


class TestYoutubeDLCookieJar(unittest.TestCase):
    def test_keep_session_cookies(self):
        cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt')
        cookiejar.load(ignore_discard=True, ignore_expires=True)
        tf = tempfile.NamedTemporaryFile(delete=False)
        try:
            cookiejar.save(filename=tf.name, ignore_discard=True, ignore_expires=True)
            temp = tf.read().decode('utf-8')
            self.assertTrue(re.search(
                r'www\.foobar\.foobar\s+FALSE\s+/\s+TRUE\s+0\s+YoutubeDLExpiresEmpty\s+YoutubeDLExpiresEmptyValue', temp))
            self.assertTrue(re.search(
                r'www\.foobar\.foobar\s+FALSE\s+/\s+TRUE\s+0\s+YoutubeDLExpires0\s+YoutubeDLExpires0Value', temp))
        finally:
            tf.close()
            os.remove(tf.name)

    def test_strip_httponly_prefix(self):
        cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
        cookiejar.load(ignore_discard=True, ignore_expires=True)

        def assert_cookie_has_value(key):
            self.assertEqual(cookiejar._cookies['www.foobar.foobar']['/'][key].value, key + '_VALUE')

        assert_cookie_has_value('HTTPONLY_COOKIE')
        assert_cookie_has_value('JS_ACCESSIBLE_COOKIE')

    def test_malformed_cookies(self):
        cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/malformed_cookies.txt')
        cookiejar.load(ignore_discard=True, ignore_expires=True)
        # Cookies should be empty since all malformed cookie file entries
        # will be ignored
        self.assertFalse(cookiejar._cookies)


if __name__ == '__main__':
    unittest.main()
@ -1,63 +0,0 @@
#!/usr/bin/env python

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.aes import aes_decrypt, aes_encrypt, aes_cbc_decrypt, aes_cbc_encrypt, aes_decrypt_text
from youtube_dl.utils import bytes_to_intlist, intlist_to_bytes
import base64

# the encrypted data can be generated with 'devscripts/generate_aes_testdata.py'


class TestAES(unittest.TestCase):
    def setUp(self):
        self.key = self.iv = [0x20, 0x15] + 14 * [0]
        self.secret_msg = b'Secret message goes here'

    def test_encrypt(self):
        msg = b'message'
        key = list(range(16))
        encrypted = aes_encrypt(bytes_to_intlist(msg), key)
        decrypted = intlist_to_bytes(aes_decrypt(encrypted, key))
        self.assertEqual(decrypted, msg)

    def test_cbc_decrypt(self):
        data = bytes_to_intlist(
            b"\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd"
        )
        decrypted = intlist_to_bytes(aes_cbc_decrypt(data, self.key, self.iv))
        self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)

    def test_cbc_encrypt(self):
        data = bytes_to_intlist(self.secret_msg)
        encrypted = intlist_to_bytes(aes_cbc_encrypt(data, self.key, self.iv))
        self.assertEqual(
            encrypted,
            b"\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd")

    def test_decrypt_text(self):
        password = intlist_to_bytes(self.key).decode('utf-8')
        encrypted = base64.b64encode(
            intlist_to_bytes(self.iv[:8])
            + b'\x17\x15\x93\xab\x8d\x80V\xcdV\xe0\t\xcdo\xc2\xa5\xd8ksM\r\xe27N\xae'
        ).decode('utf-8')
        decrypted = (aes_decrypt_text(encrypted, password, 16))
        self.assertEqual(decrypted, self.secret_msg)

        password = intlist_to_bytes(self.key).decode('utf-8')
        encrypted = base64.b64encode(
            intlist_to_bytes(self.iv[:8])
            + b'\x0b\xe6\xa4\xd9z\x0e\xb8\xb9\xd0\xd4i_\x85\x1d\x99\x98_\xe5\x80\xe7.\xbf\xa5\x83'
        ).decode('utf-8')
        decrypted = (aes_decrypt_text(encrypted, password, 32))
        self.assertEqual(decrypted, self.secret_msg)


if __name__ == '__main__':
    unittest.main()
@ -1,50 +0,0 @@
#!/usr/bin/env python
from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import try_rm


from youtube_dl import YoutubeDL


def _download_restricted(url, filename, age):
    """ Returns true if the file has been downloaded """

    params = {
        'age_limit': age,
        'skip_download': True,
        'writeinfojson': True,
        'outtmpl': '%(id)s.%(ext)s',
    }
    ydl = YoutubeDL(params)
    ydl.add_default_info_extractors()
    json_filename = os.path.splitext(filename)[0] + '.info.json'
    try_rm(json_filename)
    ydl.download([url])
    res = os.path.exists(json_filename)
    try_rm(json_filename)
    return res


class TestAgeRestriction(unittest.TestCase):
    def _assert_restricted(self, url, filename, age, old_age=None):
        self.assertTrue(_download_restricted(url, filename, old_age))
        self.assertFalse(_download_restricted(url, filename, age))

    def test_youtube(self):
        self._assert_restricted('07FYdnEawAQ', '07FYdnEawAQ.mp4', 10)

    def test_youporn(self):
        self._assert_restricted(
            'http://www.youporn.com/watch/505835/sex-ed-is-it-safe-to-masturbate-daily/',
            '505835.mp4', 2, old_age=25)


if __name__ == '__main__':
    unittest.main()
@ -1,133 +0,0 @@
#!/usr/bin/env python

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
import collections
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


from test.helper import gettestcases

from youtube_dl.extractor import (
    FacebookIE,
    gen_extractors,
    YoutubeIE,
)


class TestAllURLsMatching(unittest.TestCase):
    def setUp(self):
        self.ies = gen_extractors()

    def matching_ies(self, url):
        return [ie.IE_NAME for ie in self.ies if ie.suitable(url) and ie.IE_NAME != 'generic']

    def assertMatch(self, url, ie_list):
        self.assertEqual(self.matching_ies(url), ie_list)

    def test_youtube_playlist_matching(self):
        assertPlaylist = lambda url: self.assertMatch(url, ['youtube:playlist'])
        assertPlaylist('ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
        assertPlaylist('UUBABnxM4Ar9ten8Mdjj1j0Q')  # 585
        assertPlaylist('PL63F0C78739B09958')
        # assertPlaylist('https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q')
        assertPlaylist('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
        # assertPlaylist('https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC')
        assertPlaylist('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012')  # 668
        self.assertFalse('youtube:playlist' in self.matching_ies('PLtS2H6bU1M'))
        # Top tracks
        # assertPlaylist('https://www.youtube.com/playlist?list=MCUS.20142101')

    def test_youtube_matching(self):
        self.assertTrue(YoutubeIE.suitable('PLtS2H6bU1M'))
        self.assertFalse(YoutubeIE.suitable('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012'))  # 668
        self.assertMatch('http://youtu.be/BaW_jenozKc', ['youtube'])
        self.assertMatch('http://www.youtube.com/v/BaW_jenozKc', ['youtube'])
        self.assertMatch('https://youtube.googleapis.com/v/BaW_jenozKc', ['youtube'])
        self.assertMatch('http://www.cleanvideosearch.com/media/action/yt/watch?videoId=8v_4O44sfjM', ['youtube'])

    def test_youtube_channel_matching(self):
        assertChannel = lambda url: self.assertMatch(url, ['youtube:tab'])
        assertChannel('https://www.youtube.com/channel/HCtnHdj3df7iM')
        assertChannel('https://www.youtube.com/channel/HCtnHdj3df7iM?feature=gb_ch_rec')
        assertChannel('https://www.youtube.com/channel/HCtnHdj3df7iM/videos')

    # def test_youtube_user_matching(self):
    #     self.assertMatch('http://www.youtube.com/NASAgovVideo/videos', ['youtube:tab'])

    def test_youtube_feeds(self):
        self.assertMatch('https://www.youtube.com/feed/watch_later', ['youtube:watchlater'])
        self.assertMatch('https://www.youtube.com/feed/subscriptions', ['youtube:subscriptions'])
        self.assertMatch('https://www.youtube.com/feed/recommended', ['youtube:recommended'])

    # def test_youtube_search_matching(self):
    #     self.assertMatch('http://www.youtube.com/results?search_query=making+mustard', ['youtube:search_url'])
    #     self.assertMatch('https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', ['youtube:search_url'])

    def test_youtube_extract(self):
        assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)
        assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
        assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
        assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc')
        assertExtractId('https://www.youtube.com/watch_popup?v=BaW_jenozKc', 'BaW_jenozKc')
        assertExtractId('http://www.youtube.com/watch?v=BaW_jenozKcsharePLED17F32AD9753930', 'BaW_jenozKc')
        assertExtractId('BaW_jenozKc', 'BaW_jenozKc')

    def test_facebook_matching(self):
        self.assertTrue(FacebookIE.suitable('https://www.facebook.com/Shiniknoh#!/photo.php?v=10153317450565268'))
        self.assertTrue(FacebookIE.suitable('https://www.facebook.com/cindyweather?fref=ts#!/photo.php?v=10152183998945793'))

    def test_no_duplicates(self):
        ies = gen_extractors()
        for tc in gettestcases(include_onlymatching=True):
            url = tc['url']
            for ie in ies:
                if type(ie).__name__ in ('GenericIE', tc['name'] + 'IE'):
                    self.assertTrue(ie.suitable(url), '%s should match URL %r' % (type(ie).__name__, url))
                else:
                    self.assertFalse(
                        ie.suitable(url),
                        '%s should not match URL %r . That URL belongs to %s.' % (type(ie).__name__, url, tc['name']))

    def test_keywords(self):
        self.assertMatch(':ytsubs', ['youtube:subscriptions'])
        self.assertMatch(':ytsubscriptions', ['youtube:subscriptions'])
        self.assertMatch(':ythistory', ['youtube:history'])

    def test_vimeo_matching(self):
        self.assertMatch('https://vimeo.com/channels/tributes', ['vimeo:channel'])
        self.assertMatch('https://vimeo.com/channels/31259', ['vimeo:channel'])
        self.assertMatch('https://vimeo.com/channels/31259/53576664', ['vimeo'])
        self.assertMatch('https://vimeo.com/user7108434', ['vimeo:user'])
        self.assertMatch('https://vimeo.com/user7108434/videos', ['vimeo:user'])
        self.assertMatch('https://vimeo.com/user21297594/review/75524534/3c257a1b5d', ['vimeo:review'])

    # https://github.com/ytdl-org/youtube-dl/issues/1930
    def test_soundcloud_not_matching_sets(self):
        self.assertMatch('http://soundcloud.com/floex/sets/gone-ep', ['soundcloud:set'])

    def test_tumblr(self):
        self.assertMatch('http://tatianamaslanydaily.tumblr.com/post/54196191430/orphan-black-dvd-extra-behind-the-scenes', ['Tumblr'])
        self.assertMatch('http://tatianamaslanydaily.tumblr.com/post/54196191430', ['Tumblr'])

    def test_pbs(self):
        # https://github.com/ytdl-org/youtube-dl/issues/2350
        self.assertMatch('http://video.pbs.org/viralplayer/2365173446/', ['pbs'])
        self.assertMatch('http://video.pbs.org/widget/partnerplayer/980042464/', ['pbs'])

    def test_no_duplicated_ie_names(self):
        name_accu = collections.defaultdict(list)
        for ie in self.ies:
            name_accu[ie.IE_NAME.lower()].append(type(ie).__name__)
        for (ie_name, ie_list) in name_accu.items():
            self.assertEqual(
                len(ie_list), 1,
                'Multiple extractors with the same IE_NAME "%s" (%s)' % (ie_name, ', '.join(ie_list)))


if __name__ == '__main__':
    unittest.main()
@ -1,59 +0,0 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import unicode_literals

import shutil

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


from test.helper import FakeYDL
from youtube_dl.cache import Cache


def _is_empty(d):
    return not bool(os.listdir(d))


def _mkdir(d):
    if not os.path.exists(d):
        os.mkdir(d)


class TestCache(unittest.TestCase):
    def setUp(self):
        TEST_DIR = os.path.dirname(os.path.abspath(__file__))
        TESTDATA_DIR = os.path.join(TEST_DIR, 'testdata')
        _mkdir(TESTDATA_DIR)
        self.test_dir = os.path.join(TESTDATA_DIR, 'cache_test')
        self.tearDown()

    def tearDown(self):
        if os.path.exists(self.test_dir):
            shutil.rmtree(self.test_dir)

    def test_cache(self):
        ydl = FakeYDL({
            'cachedir': self.test_dir,
        })
        c = Cache(ydl)
        obj = {'x': 1, 'y': ['ä', '\\a', True]}
        self.assertEqual(c.load('test_cache', 'k.'), None)
        c.store('test_cache', 'k.', obj)
        self.assertEqual(c.load('test_cache', 'k2'), None)
        self.assertFalse(_is_empty(self.test_dir))
        self.assertEqual(c.load('test_cache', 'k.'), obj)
        self.assertEqual(c.load('test_cache', 'y'), None)
        self.assertEqual(c.load('test_cache2', 'k.'), None)
        c.remove()
        self.assertFalse(os.path.exists(self.test_dir))
        self.assertEqual(c.load('test_cache', 'k.'), None)


if __name__ == '__main__':
    unittest.main()
@ -1,126 +0,0 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


from youtube_dl.compat import (
    compat_getenv,
    compat_setenv,
    compat_etree_Element,
    compat_etree_fromstring,
    compat_expanduser,
    compat_shlex_split,
    compat_str,
    compat_struct_unpack,
    compat_urllib_parse_unquote,
    compat_urllib_parse_unquote_plus,
    compat_urllib_parse_urlencode,
)


class TestCompat(unittest.TestCase):
    def test_compat_getenv(self):
        test_str = 'тест'
        compat_setenv('YOUTUBE_DL_COMPAT_GETENV', test_str)
        self.assertEqual(compat_getenv('YOUTUBE_DL_COMPAT_GETENV'), test_str)

    def test_compat_setenv(self):
        test_var = 'YOUTUBE_DL_COMPAT_SETENV'
        test_str = 'тест'
        compat_setenv(test_var, test_str)
        compat_getenv(test_var)
        self.assertEqual(compat_getenv(test_var), test_str)

    def test_compat_expanduser(self):
        old_home = os.environ.get('HOME')
        test_str = r'C:\Documents and Settings\тест\Application Data'
        compat_setenv('HOME', test_str)
        self.assertEqual(compat_expanduser('~'), test_str)
        compat_setenv('HOME', old_home or '')

    def test_all_present(self):
        import youtube_dl.compat
        all_names = youtube_dl.compat.__all__
        present_names = set(filter(
            lambda c: '_' in c and not c.startswith('_'),
            dir(youtube_dl.compat))) - set(['unicode_literals'])
        self.assertEqual(all_names, sorted(present_names))

    def test_compat_urllib_parse_unquote(self):
        self.assertEqual(compat_urllib_parse_unquote('abc%20def'), 'abc def')
        self.assertEqual(compat_urllib_parse_unquote('%7e/abc+def'), '~/abc+def')
        self.assertEqual(compat_urllib_parse_unquote(''), '')
        self.assertEqual(compat_urllib_parse_unquote('%'), '%')
        self.assertEqual(compat_urllib_parse_unquote('%%'), '%%')
        self.assertEqual(compat_urllib_parse_unquote('%%%'), '%%%')
        self.assertEqual(compat_urllib_parse_unquote('%2F'), '/')
        self.assertEqual(compat_urllib_parse_unquote('%2f'), '/')
        self.assertEqual(compat_urllib_parse_unquote('%E6%B4%A5%E6%B3%A2'), '津波')
        self.assertEqual(
            compat_urllib_parse_unquote('''<meta property="og:description" content="%E2%96%81%E2%96%82%E2%96%83%E2%96%84%25%E2%96%85%E2%96%86%E2%96%87%E2%96%88" />
%<a href="https://ar.wikipedia.org/wiki/%D8%AA%D8%B3%D9%88%D9%86%D8%A7%D9%85%D9%8A">%a'''),
            '''<meta property="og:description" content="▁▂▃▄%▅▆▇█" />
%<a href="https://ar.wikipedia.org/wiki/تسونامي">%a''')
        self.assertEqual(
            compat_urllib_parse_unquote('''%28%5E%E2%97%A3_%E2%97%A2%5E%29%E3%81%A3%EF%B8%BB%E3%83%87%E2%95%90%E4%B8%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%86%B6%I%Break%25Things%'''),
            '''(^◣_◢^)っ︻デ═一 ⇀ ⇀ ⇀ ⇀ ⇀ ↶%I%Break%Things%''')

    def test_compat_urllib_parse_unquote_plus(self):
        self.assertEqual(compat_urllib_parse_unquote_plus('abc%20def'), 'abc def')
        self.assertEqual(compat_urllib_parse_unquote_plus('%7e/abc+def'), '~/abc def')

    def test_compat_urllib_parse_urlencode(self):
        self.assertEqual(compat_urllib_parse_urlencode({'abc': 'def'}), 'abc=def')
        self.assertEqual(compat_urllib_parse_urlencode({'abc': b'def'}), 'abc=def')
        self.assertEqual(compat_urllib_parse_urlencode({b'abc': 'def'}), 'abc=def')
        self.assertEqual(compat_urllib_parse_urlencode({b'abc': b'def'}), 'abc=def')
        self.assertEqual(compat_urllib_parse_urlencode([('abc', 'def')]), 'abc=def')
        self.assertEqual(compat_urllib_parse_urlencode([('abc', b'def')]), 'abc=def')
        self.assertEqual(compat_urllib_parse_urlencode([(b'abc', 'def')]), 'abc=def')
        self.assertEqual(compat_urllib_parse_urlencode([(b'abc', b'def')]), 'abc=def')

    def test_compat_shlex_split(self):
        self.assertEqual(compat_shlex_split('-option "one two"'), ['-option', 'one two'])
        self.assertEqual(compat_shlex_split('-option "one\ntwo" \n -flag'), ['-option', 'one\ntwo', '-flag'])
        self.assertEqual(compat_shlex_split('-val 中文'), ['-val', '中文'])

    def test_compat_etree_Element(self):
        try:
            compat_etree_Element.items
        except AttributeError:
            self.fail('compat_etree_Element is not a type')

    def test_compat_etree_fromstring(self):
        xml = '''
            <root foo="bar" spam="中文">
                <normal>foo</normal>
                <chinese>中文</chinese>
                <foo><bar>spam</bar></foo>
            </root>
        '''
        doc = compat_etree_fromstring(xml.encode('utf-8'))
        self.assertTrue(isinstance(doc.attrib['foo'], compat_str))
        self.assertTrue(isinstance(doc.attrib['spam'], compat_str))
        self.assertTrue(isinstance(doc.find('normal').text, compat_str))
        self.assertTrue(isinstance(doc.find('chinese').text, compat_str))
        self.assertTrue(isinstance(doc.find('foo/bar').text, compat_str))

    def test_compat_etree_fromstring_doctype(self):
        xml = '''<?xml version="1.0"?>
<!DOCTYPE smil PUBLIC "-//W3C//DTD SMIL 2.0//EN" "http://www.w3.org/2001/SMIL20/SMIL20.dtd">
<smil xmlns="http://www.w3.org/2001/SMIL20/Language"></smil>'''
        compat_etree_fromstring(xml)

    def test_struct_unpack(self):
        self.assertEqual(compat_struct_unpack('!B', b'\x00'), (0,))


if __name__ == '__main__':
    unittest.main()
@ -1,265 +0,0 @@
#!/usr/bin/env python

from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import (
    assertGreaterEqual,
    expect_warnings,
    get_params,
    gettestcases,
    expect_info_dict,
    try_rm,
    report_warning,
)


import hashlib
import io
import json
import socket

import youtube_dl.YoutubeDL
from youtube_dl.compat import (
    compat_http_client,
    compat_urllib_error,
    compat_HTTPError,
)
from youtube_dl.utils import (
    DownloadError,
    ExtractorError,
    format_bytes,
    UnavailableVideoError,
)
from youtube_dl.extractor import get_info_extractor

RETRIES = 3


class YoutubeDL(youtube_dl.YoutubeDL):
    def __init__(self, *args, **kwargs):
        self.to_stderr = self.to_screen
        self.processed_info_dicts = []
        super(YoutubeDL, self).__init__(*args, **kwargs)

    def report_warning(self, message):
        # Don't accept warnings during tests
        raise ExtractorError(message)

    def process_info(self, info_dict):
        self.processed_info_dicts.append(info_dict)
        return super(YoutubeDL, self).process_info(info_dict)


def _file_md5(fn):
    with open(fn, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


defs = gettestcases()


class TestDownload(unittest.TestCase):
    # Parallel testing in nosetests. See
    # http://nose.readthedocs.org/en/latest/doc_tests/test_multiprocess/multiprocess.html
    _multiprocess_shared_ = True

    maxDiff = None

    def __str__(self):
        """Identify each test with the `add_ie` attribute, if available."""

        def strclass(cls):
            """From 2.7's unittest; 2.6 had _strclass so we can't import it."""
            return '%s.%s' % (cls.__module__, cls.__name__)

        add_ie = getattr(self, self._testMethodName).add_ie
        return '%s (%s)%s:' % (self._testMethodName,
                               strclass(self.__class__),
                               ' [%s]' % add_ie if add_ie else '')

    def setUp(self):
        self.defs = defs

# Dynamically generate tests


def generator(test_case, tname):

    def test_template(self):
        ie = youtube_dl.extractor.get_info_extractor(test_case['name'])()
        other_ies = [get_info_extractor(ie_key)() for ie_key in test_case.get('add_ie', [])]
        is_playlist = any(k.startswith('playlist') for k in test_case)
        test_cases = test_case.get(
            'playlist', [] if is_playlist else [test_case])

        def print_skipping(reason):
            print('Skipping %s: %s' % (test_case['name'], reason))
        if not ie.working():
            print_skipping('IE marked as not _WORKING')
            return

        for tc in test_cases:
            info_dict = tc.get('info_dict', {})
            if not (info_dict.get('id') and info_dict.get('ext')):
                raise Exception('Test definition incorrect. The output file cannot be known. Are both \'id\' and \'ext\' keys present?')

        if 'skip' in test_case:
            print_skipping(test_case['skip'])
            return
        for other_ie in other_ies:
            if not other_ie.working():
                print_skipping('test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
                return

        params = get_params(test_case.get('params', {}))
        params['outtmpl'] = tname + '_' + params['outtmpl']
        if is_playlist and 'playlist' not in test_case:
            params.setdefault('extract_flat', 'in_playlist')
            params.setdefault('skip_download', True)

        ydl = YoutubeDL(params, auto_init=False)
        ydl.add_default_info_extractors()
        finished_hook_called = set()

        def _hook(status):
            if status['status'] == 'finished':
                finished_hook_called.add(status['filename'])
        ydl.add_progress_hook(_hook)
        expect_warnings(ydl, test_case.get('expected_warnings', []))

        def get_tc_filename(tc):
            return ydl.prepare_filename(tc.get('info_dict', {}))

        res_dict = None

        def try_rm_tcs_files(tcs=None):
            if tcs is None:
                tcs = test_cases
            for tc in tcs:
                tc_filename = get_tc_filename(tc)
                try_rm(tc_filename)
                try_rm(tc_filename + '.part')
                try_rm(os.path.splitext(tc_filename)[0] + '.info.json')
        try_rm_tcs_files()
        try:
            try_num = 1
            while True:
                try:
                    # We're not using .download here since that is just a shim
                    # for outside error handling, and returns the exit code
                    # instead of the result dict.
                    res_dict = ydl.extract_info(
                        test_case['url'],
                        force_generic_extractor=params.get('force_generic_extractor', False))
                except (DownloadError, ExtractorError) as err:
                    # Check if the exception is not a network related one
                    if not err.exc_info[0] in (compat_urllib_error.URLError, socket.timeout, UnavailableVideoError, compat_http_client.BadStatusLine) or (err.exc_info[0] == compat_HTTPError and err.exc_info[1].code == 503):
                        raise

                    if try_num == RETRIES:
                        report_warning('%s failed due to network errors, skipping...' % tname)
                        return

                    print('Retrying: {0} failed tries\n\n##########\n\n'.format(try_num))

                    try_num += 1
                else:
                    break

            if is_playlist:
                self.assertTrue(res_dict['_type'] in ['playlist', 'multi_video'])
                self.assertTrue('entries' in res_dict)
                expect_info_dict(self, res_dict, test_case.get('info_dict', {}))

            if 'playlist_mincount' in test_case:
                assertGreaterEqual(
                    self,
                    len(res_dict['entries']),
                    test_case['playlist_mincount'],
                    'Expected at least %d in playlist %s, but got only %d' % (
                        test_case['playlist_mincount'], test_case['url'],
                        len(res_dict['entries'])))
            if 'playlist_count' in test_case:
                self.assertEqual(
                    len(res_dict['entries']),
                    test_case['playlist_count'],
                    'Expected %d entries in playlist %s, but got %d.' % (
                        test_case['playlist_count'],
                        test_case['url'],
                        len(res_dict['entries']),
                    ))
            if 'playlist_duration_sum' in test_case:
                got_duration = sum(e['duration'] for e in res_dict['entries'])
                self.assertEqual(
                    test_case['playlist_duration_sum'], got_duration)

            # Generalize both playlists and single videos to unified format for
            # simplicity
            if 'entries' not in res_dict:
                res_dict['entries'] = [res_dict]

            for tc_num, tc in enumerate(test_cases):
                tc_res_dict = res_dict['entries'][tc_num]
                # First, check test cases' data against extracted data alone
                expect_info_dict(self, tc_res_dict, tc.get('info_dict', {}))
                # Now, check downloaded file consistency
                tc_filename = get_tc_filename(tc)
                if not test_case.get('params', {}).get('skip_download', False):
                    self.assertTrue(os.path.exists(tc_filename), msg='Missing file ' + tc_filename)
                    self.assertTrue(tc_filename in finished_hook_called)
                    expected_minsize = tc.get('file_minsize', 10000)
                    if expected_minsize is not None:
                        if params.get('test'):
                            expected_minsize = max(expected_minsize, 10000)
                        got_fsize = os.path.getsize(tc_filename)
                        assertGreaterEqual(
                            self, got_fsize, expected_minsize,
                            'Expected %s to be at least %s, but it\'s only %s ' %
                            (tc_filename, format_bytes(expected_minsize),
                             format_bytes(got_fsize)))
                    if 'md5' in tc:
                        md5_for_file = _file_md5(tc_filename)
                        self.assertEqual(tc['md5'], md5_for_file)
                # Finally, check test cases' data again but this time against
                # extracted data from info JSON file written during processing
                info_json_fn = os.path.splitext(tc_filename)[0] + '.info.json'
                self.assertTrue(
                    os.path.exists(info_json_fn),
                    'Missing info file %s' % info_json_fn)
                with io.open(info_json_fn, encoding='utf-8') as infof:
                    info_dict = json.load(infof)
                expect_info_dict(self, info_dict, tc.get('info_dict', {}))
        finally:
            try_rm_tcs_files()
            if is_playlist and res_dict is not None and res_dict.get('entries'):
                # Remove all other files that may have been extracted if the
                # extractor returns full results even with extract_flat
                res_tcs = [{'info_dict': e} for e in res_dict['entries']]
                try_rm_tcs_files(res_tcs)

    return test_template


# And add them to TestDownload
for n, test_case in enumerate(defs):
    tname = 'test_' + str(test_case['name'])
    i = 1
    while hasattr(TestDownload, tname):
        tname = 'test_%s_%d' % (test_case['name'], i)
        i += 1
    test_method = generator(test_case, tname)
    test_method.__name__ = str(tname)
    ie_list = test_case.get('add_ie')
    test_method.add_ie = ie_list and ','.join(ie_list)
    setattr(TestDownload, test_method.__name__, test_method)
    del test_method


if __name__ == '__main__':
    unittest.main()
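The dynamic test registration at the bottom of this deleted test_download.py closes a factory over each test-case dict and attaches the resulting method to the TestCase class. A minimal standalone sketch of that pattern (the `defs` list and the URL check are hypothetical stand-ins for `gettestcases()` and the real download logic):

```python
import unittest


class TestDownload(unittest.TestCase):
    maxDiff = None


def generator(test_case, tname):
    # Close over the case dict so each generated method checks its own data.
    def test_template(self):
        self.assertTrue(test_case['url'].startswith('http'))
    return test_template


# Hypothetical stand-in for gettestcases()
defs = [{'name': 'Example', 'url': 'http://example.com/'}]

for test_case in defs:
    tname = 'test_' + str(test_case['name'])
    test_method = generator(test_case, tname)
    test_method.__name__ = str(tname)
    setattr(TestDownload, test_method.__name__, test_method)
    del test_method

print(hasattr(TestDownload, 'test_Example'))  # True
```

Setting `__name__` before `setattr` matters: unittest discovers methods by attribute name, and a readable `__name__` keeps failure reports traceable to the generating case.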
@ -1,115 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals

# Allow direct execution
import os
import re
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import http_server_port, try_rm
from youtube_dl import YoutubeDL
from youtube_dl.compat import compat_http_server
from youtube_dl.downloader.http import HttpFD
from youtube_dl.utils import encodeFilename
import threading

TEST_DIR = os.path.dirname(os.path.abspath(__file__))


TEST_SIZE = 10 * 1024


class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        pass

    def send_content_range(self, total=None):
        range_header = self.headers.get('Range')
        start = end = None
        if range_header:
            mobj = re.search(r'^bytes=(\d+)-(\d+)', range_header)
            if mobj:
                start = int(mobj.group(1))
                end = int(mobj.group(2))
        valid_range = start is not None and end is not None
        if valid_range:
            content_range = 'bytes %d-%d' % (start, end)
            if total:
                content_range += '/%d' % total
            self.send_header('Content-Range', content_range)
        return (end - start + 1) if valid_range else total

    def serve(self, range=True, content_length=True):
        self.send_response(200)
        self.send_header('Content-Type', 'video/mp4')
        size = TEST_SIZE
        if range:
            size = self.send_content_range(TEST_SIZE)
        if content_length:
            self.send_header('Content-Length', size)
        self.end_headers()
        self.wfile.write(b'#' * size)

    def do_GET(self):
        if self.path == '/regular':
            self.serve()
        elif self.path == '/no-content-length':
            self.serve(content_length=False)
        elif self.path == '/no-range':
            self.serve(range=False)
        elif self.path == '/no-range-no-content-length':
            self.serve(range=False, content_length=False)
        else:
            assert False


class FakeLogger(object):
    def debug(self, msg):
        pass

    def warning(self, msg):
        pass

    def error(self, msg):
        pass


class TestHttpFD(unittest.TestCase):
    def setUp(self):
        self.httpd = compat_http_server.HTTPServer(
            ('127.0.0.1', 0), HTTPTestRequestHandler)
        self.port = http_server_port(self.httpd)
        self.server_thread = threading.Thread(target=self.httpd.serve_forever)
        self.server_thread.daemon = True
        self.server_thread.start()

    def download(self, params, ep):
        params['logger'] = FakeLogger()
        ydl = YoutubeDL(params)
        downloader = HttpFD(ydl, params)
        filename = 'testfile.mp4'
        try_rm(encodeFilename(filename))
        self.assertTrue(downloader.real_download(filename, {
            'url': 'http://127.0.0.1:%d/%s' % (self.port, ep),
        }))
        self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE)
        try_rm(encodeFilename(filename))

    def download_all(self, params):
        for ep in ('regular', 'no-content-length', 'no-range', 'no-range-no-content-length'):
            self.download(params, ep)

    def test_regular(self):
        self.download_all({})

    def test_chunked(self):
        self.download_all({
            'http_chunk_size': 1000,
        })


if __name__ == '__main__':
    unittest.main()
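The `send_content_range` helper in the downloader test above boils down to parsing a `bytes=start-end` Range header and deciding how many bytes to serve. A standalone sketch of just that arithmetic (the helper name here is illustrative, not part of youtube-dl):

```python
import re


def bytes_to_serve(range_header, total):
    # 'bytes=start-end' requests an inclusive range, so its length is
    # end - start + 1; a missing or malformed header falls back to the
    # full payload size.
    if range_header:
        mobj = re.search(r'^bytes=(\d+)-(\d+)', range_header)
        if mobj:
            start, end = int(mobj.group(1)), int(mobj.group(2))
            return end - start + 1
    return total


print(bytes_to_serve('bytes=0-999', 10 * 1024))  # 1000
print(bytes_to_serve(None, 10 * 1024))  # 10240
```

The inclusive end is the detail the test server relies on: a chunked download with `http_chunk_size: 1000` issues `bytes=0-999`, which is exactly 1000 bytes.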
|
@ -1,44 +0,0 @@
#!/usr/bin/env python
# coding: utf-8

from __future__ import unicode_literals

import unittest

import sys
import os
import subprocess
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.utils import encodeArgument

rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))


try:
    _DEV_NULL = subprocess.DEVNULL
except AttributeError:
    _DEV_NULL = open(os.devnull, 'wb')


class TestExecution(unittest.TestCase):
    def test_import(self):
        subprocess.check_call([sys.executable, '-c', 'import youtube_dl'], cwd=rootDir)

    def test_module_exec(self):
        if sys.version_info >= (2, 7):  # Python 2.6 doesn't support package execution
            subprocess.check_call([sys.executable, '-m', 'youtube_dl', '--version'], cwd=rootDir, stdout=_DEV_NULL)

    def test_main_exec(self):
        subprocess.check_call([sys.executable, 'youtube_dl/__main__.py', '--version'], cwd=rootDir, stdout=_DEV_NULL)

    def test_cmdline_umlauts(self):
        p = subprocess.Popen(
            [sys.executable, 'youtube_dl/__main__.py', encodeArgument('ä'), '--version'],
            cwd=rootDir, stdout=_DEV_NULL, stderr=subprocess.PIPE)
        _, stderr = p.communicate()
        self.assertFalse(stderr)


if __name__ == '__main__':
    unittest.main()
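Each check in TestExecution re-invokes the interpreter as a subprocess so that import and entry-point failures surface as non-zero exit codes. A minimal sketch of the same idea, using a stdlib import as a stand-in for `import youtube_dl` (which may not be installed where this runs):

```python
import subprocess
import sys

# Run the current interpreter as a child process and check it exits cleanly.
# 'import json' stands in for the package under test.
result = subprocess.run(
    [sys.executable, '-c', 'import json'],
    capture_output=True,
)
print(result.returncode)  # 0
```

`subprocess.check_call`, as used in the test file, raises `CalledProcessError` on a non-zero exit instead of returning the code, which is what turns a broken import into a test failure.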
|
@ -1,166 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import http_server_port
from youtube_dl import YoutubeDL
from youtube_dl.compat import compat_http_server, compat_urllib_request
import ssl
import threading

TEST_DIR = os.path.dirname(os.path.abspath(__file__))


class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        pass

    def do_GET(self):
        if self.path == '/video.html':
            self.send_response(200)
            self.send_header('Content-Type', 'text/html; charset=utf-8')
            self.end_headers()
            self.wfile.write(b'<html><video src="/vid.mp4" /></html>')
        elif self.path == '/vid.mp4':
            self.send_response(200)
            self.send_header('Content-Type', 'video/mp4')
            self.end_headers()
            self.wfile.write(b'\x00\x00\x00\x00\x20\x66\x74[video]')
        elif self.path == '/302':
            if sys.version_info[0] == 3:
                # XXX: Python 3 http server does not allow non-ASCII header values
                self.send_response(404)
                self.end_headers()
                return

            new_url = 'http://127.0.0.1:%d/中文.html' % http_server_port(self.server)
            self.send_response(302)
            self.send_header(b'Location', new_url.encode('utf-8'))
            self.end_headers()
        elif self.path == '/%E4%B8%AD%E6%96%87.html':
            self.send_response(200)
            self.send_header('Content-Type', 'text/html; charset=utf-8')
            self.end_headers()
            self.wfile.write(b'<html><video src="/vid.mp4" /></html>')
        else:
            assert False


class FakeLogger(object):
    def debug(self, msg):
        pass

    def warning(self, msg):
        pass

    def error(self, msg):
        pass


class TestHTTP(unittest.TestCase):
    def setUp(self):
        self.httpd = compat_http_server.HTTPServer(
            ('127.0.0.1', 0), HTTPTestRequestHandler)
        self.port = http_server_port(self.httpd)
        self.server_thread = threading.Thread(target=self.httpd.serve_forever)
        self.server_thread.daemon = True
        self.server_thread.start()

    def test_unicode_path_redirection(self):
        # XXX: Python 3 http server does not allow non-ASCII header values
        if sys.version_info[0] == 3:
            return

        ydl = YoutubeDL({'logger': FakeLogger()})
        r = ydl.extract_info('http://127.0.0.1:%d/302' % self.port)
        self.assertEqual(r['entries'][0]['url'], 'http://127.0.0.1:%d/vid.mp4' % self.port)


class TestHTTPS(unittest.TestCase):
    def setUp(self):
        certfn = os.path.join(TEST_DIR, 'testcert.pem')
        self.httpd = compat_http_server.HTTPServer(
            ('127.0.0.1', 0), HTTPTestRequestHandler)
        self.httpd.socket = ssl.wrap_socket(
            self.httpd.socket, certfile=certfn, server_side=True)
        self.port = http_server_port(self.httpd)
        self.server_thread = threading.Thread(target=self.httpd.serve_forever)
        self.server_thread.daemon = True
        self.server_thread.start()

    def test_nocheckcertificate(self):
        if sys.version_info >= (2, 7, 9):  # No certificate checking anyways
            ydl = YoutubeDL({'logger': FakeLogger()})
            self.assertRaises(
                Exception,
                ydl.extract_info, 'https://127.0.0.1:%d/video.html' % self.port)

        ydl = YoutubeDL({'logger': FakeLogger(), 'nocheckcertificate': True})
        r = ydl.extract_info('https://127.0.0.1:%d/video.html' % self.port)
        self.assertEqual(r['entries'][0]['url'], 'https://127.0.0.1:%d/vid.mp4' % self.port)


def _build_proxy_handler(name):
    class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
        proxy_name = name

        def log_message(self, format, *args):
            pass

        def do_GET(self):
            self.send_response(200)
            self.send_header('Content-Type', 'text/plain; charset=utf-8')
            self.end_headers()
            self.wfile.write('{self.proxy_name}: {self.path}'.format(self=self).encode('utf-8'))
    return HTTPTestRequestHandler


class TestProxy(unittest.TestCase):
    def setUp(self):
        self.proxy = compat_http_server.HTTPServer(
            ('127.0.0.1', 0), _build_proxy_handler('normal'))
        self.port = http_server_port(self.proxy)
        self.proxy_thread = threading.Thread(target=self.proxy.serve_forever)
        self.proxy_thread.daemon = True
        self.proxy_thread.start()

        self.geo_proxy = compat_http_server.HTTPServer(
            ('127.0.0.1', 0), _build_proxy_handler('geo'))
        self.geo_port = http_server_port(self.geo_proxy)
        self.geo_proxy_thread = threading.Thread(target=self.geo_proxy.serve_forever)
        self.geo_proxy_thread.daemon = True
        self.geo_proxy_thread.start()

    def test_proxy(self):
        geo_proxy = '127.0.0.1:{0}'.format(self.geo_port)
        ydl = YoutubeDL({
            'proxy': '127.0.0.1:{0}'.format(self.port),
            'geo_verification_proxy': geo_proxy,
        })
        url = 'http://foo.com/bar'
        response = ydl.urlopen(url).read().decode('utf-8')
        self.assertEqual(response, 'normal: {0}'.format(url))

        req = compat_urllib_request.Request(url)
        req.add_header('Ytdl-request-proxy', geo_proxy)
        response = ydl.urlopen(req).read().decode('utf-8')
        self.assertEqual(response, 'geo: {0}'.format(url))

    def test_proxy_with_idn(self):
        ydl = YoutubeDL({
            'proxy': '127.0.0.1:{0}'.format(self.port),
        })
        url = 'http://中文.tw/'
        response = ydl.urlopen(url).read().decode('utf-8')
        # b'xn--fiq228c' is '中文'.encode('idna')
        self.assertEqual(response, 'normal: http://xn--fiq228c.tw/')


if __name__ == '__main__':
    unittest.main()
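The expectation in `test_proxy_with_idn` rests on Python's built-in `idna` codec converting the Unicode hostname to its ASCII-compatible form before the request goes out; the conversion can be checked directly:

```python
# The idna codec applies the IDNA ToASCII algorithm to each domain label,
# producing the Punycode form seen by the proxy.
label = '中文'.encode('idna')
print(label)  # b'xn--fiq228c'
```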
Some files were not shown because too many files have changed in this diff.