Static:Crawling

From 33C3_Public_Wiki

As in previous years, the number of people deciding to crawl the wiki and the Fahrplan to provide mirrors has grown rapidly, so static dumps of the freely accessible content are now available for direct download. Crawling a MediaWiki is not easy, as it provides a deep structure of pages of which only a limited number is really worth crawling. Crawling the dynamic pages for editing and modifying pages causes a lot of traffic and CPU load on the host, while the packages contain only the necessary information and replace dead links with links back to the Main Page.

All information is gathered directly via the HTTP server of the wiki, which keeps the network traffic low. So, if you decide to provide a local mirror, please do not crawl our wiki and the Fahrplan yourself, but just download the provided packages.

== Downloading a Fahrplan-Dump ==

You can find a file telling you the current version and the link to download the Fahrplan at [https://fahrplan.events.ccc.de/congress/2016/Fahrplan/version https://fahrplan.events.ccc.de/congress/2016/Fahrplan/version].
 
While crawling the Fahrplan might provide incomplete data, this package always contains the raw data provided by frab.

The content of the file looks like this:

 VER: 2014-12-08 15:58 - en: Version 0.91b 2014: A Congress Odyssee
 URL: http://events.ccc.de/congress/2015/Fahrplan/b5e0dab9-72ed-4295-bb0b-855c89efc01b.tar.gz

The first line, beginning with "VER: ", tells you the time of the last export, the locales of the export and the current version. The last line, beginning with "URL: ", tells you where to download the dump of the Fahrplan as a tar.gz file and provides a full URI.

This allows you to write a script that automatically downloads the Fahrplan and extracts it to a different location. It also ensures that you always retrieve a clean version of the Fahrplan, whereas a crawl might happen during an update and return inconsistent data and broken links.
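Such a script can be sketched in a few lines of Python. Only the two-line "VER: "/"URL: " format shown above is assumed; the helper names below are illustrative, not an official client:

```python
import urllib.request

VERSION_URL = "https://fahrplan.events.ccc.de/congress/2016/Fahrplan/version"

def parse_version_file(text):
    """Pull the version string and the download URL out of the version file."""
    version = url = None
    for line in text.splitlines():
        if line.startswith("VER: "):
            version = line[len("VER: "):].strip()
        elif line.startswith("URL: "):
            url = line[len("URL: "):].strip()
    return version, url

def fetch_dump(version_url=VERSION_URL):
    """Download the version file, then the tar.gz archive it points to.

    Returns the version string and the local path of the downloaded archive.
    """
    with urllib.request.urlopen(version_url) as resp:
        version, url = parse_version_file(resp.read().decode("utf-8"))
    if url is None:
        raise ValueError("version file contained no URL line")
    archive_path, _ = urllib.request.urlretrieve(url)  # saved to a temp file
    return version, archive_path
```

Extracting the archive afterwards is a plain `tarfile` call; the important part is that the URL always comes from the version file rather than being guessed.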
 
It might also be a good idea to poll this file for apps using the JSON, XML or iCal format of the Fahrplan in order to know whether anything might have changed. There won't be any changes without the file being updated, as the structure itself is just a static export of the Fahrplan, and the information within the file is used as a reference for a fresh download.
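One way to implement that polling, sketched here with a local cache file (the cache path is an arbitrary choice, not part of the Fahrplan interface):

```python
import os
import urllib.request

VERSION_URL = "https://fahrplan.events.ccc.de/congress/2016/Fahrplan/version"
CACHE_FILE = "fahrplan.version"  # arbitrary local cache path

def version_changed(new_text, cache_file=CACHE_FILE):
    """Compare a freshly fetched version file against the cached copy.

    Returns True (and updates the cache) when the contents differ,
    so the caller knows a fresh download is warranted.
    """
    old_text = None
    if os.path.exists(cache_file):
        with open(cache_file, "r", encoding="utf-8") as fh:
            old_text = fh.read()
    if new_text == old_text:
        return False
    with open(cache_file, "w", encoding="utf-8") as fh:
        fh.write(new_text)
    return True

def poll_once():
    """Fetch the version file once; True means the Fahrplan was updated."""
    with urllib.request.urlopen(VERSION_URL) as resp:
        return version_changed(resp.read().decode("utf-8"))
```

Since the export is static, comparing the whole file is enough; there is no need to parse timestamps for this check.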
As the Fahrplan is hosted inside the congress network this year, you might consider ''using the internal DNS servers'' or adding the following line to your /etc/hosts file:

 94.45.237.103    events.ccc.de

=== GitHub Mirror ===

This year there is also a GitHub mirror where you can get the Fahrplan. It should be as up to date as the dump on events.ccc.de, as it is generated by the same scripts, but using it will lighten the load on events.ccc.de. You'll find the mirror at

 https://github.com/nexus511/33c3-fahrplan

== Extended schedules ==

Fahrplan of the four main rooms + Sendezentrum + wiki/self-organized sessions in rooms:

* [https://fahrplan.events.ccc.de/congress/2016/Fahrplan/everything.schedule.xml everything.schedule.xml]
* [https://fahrplan.events.ccc.de/congress/2016/Fahrplan/everything.schedule.json everything.schedule.json]

Only wiki/self-organized sessions in rooms:

* [https://fahrplan.events.ccc.de/congress/2016/Fahrplan/workshops.schedule.xml workshops.schedule.xml]
* [https://fahrplan.events.ccc.de/congress/2016/Fahrplan/workshops.schedule.json workshops.schedule.json]

Merged raw dump from wiki data (event + session):

* [https://fahrplan.events.ccc.de/congress/2016/Fahrplan/sessions_complete.json sessions_complete.json]

(updated about every 10 minutes via https://github.com/voc/schedule/blob/master/wiki2schedule_33C3.py)
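A consumer of these JSON exports can walk them with a few nested loops. The nesting used below (`schedule → conference → days → rooms → events`) is an assumption based on the common frab schedule layout and may need adjusting against the actual export:

```python
import json

def list_talks(schedule):
    """Yield (day_index, room, title) for every event in a frab-style
    schedule dict. The nesting is an assumption about the export format."""
    conference = schedule["schedule"]["conference"]
    for day in conference["days"]:
        for room, events in day.get("rooms", {}).items():
            for event in events:
                yield day.get("index"), room, event.get("title")

# Tiny synthetic example in the assumed shape (not real 33C3 data):
sample = json.loads("""
{"schedule": {"conference": {"days": [
  {"index": 1, "rooms": {"Saal 1": [{"title": "Opening Event"}]}}
]}}}
""")
```

For the XML variants the same traversal applies, just via an XML parser instead of `json`.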
<!--
== Downloading a Wiki-Dump ==

A full dump of the wiki is provided every three hours. You can download the file if the version has changed from the URL given in the last line of the file. The archive file contains the /wiki directory providing a structure of hashed files. It also contains a /wiki/index.php that hashes the URLs, opens the appropriate file and presents it to the browser. This should be easy to replace with a Python, Ruby or whatever implementation, and the data should even be easy to load into a database system.
-->

Latest revision as of 23:42, 25 December 2016
