
Editing the expo website is an adventure. Until now there has been no guide explaining the whole thing as a functioning system, and learning it by trial and error is non-trivial.

The website needs improvement, perhaps a complete overhaul. However, it is impossible to go about fixing it properly until we know how the whole thing works.

This manual is organized in a how-to sort of style. The categories, rather than referring to specific elements of the website, refer to processes that a maintainer would want to do.

How to update things on expo.survex.com

Getting a username and password

Use username 'expo' and password 'gosser' for access to the site.

The repositories

All the expo data is contained in three Mercurial repositories at expo.survex.com, currently hosted on Julian Todd's server. Mercurial (the command is 'hg') is a distributed version control system which allows collaborative editing and keeps track of all changes, so we can roll back and use branches if needed.

The site has been split into three parts:

expoweb - the website itself, including the generation scripts
loser - the survex survey data
tunneldata - the Tunnel data and drawings

All the scans, photos and videos have been removed from version control and are just plain files. See below for details on that.

How the website works

Part of the website is static HTML, but quite a lot is generated by scripts. So anything you check in which affects cave data or descriptions won't appear on the site until the website update scripts are run. This happens automatically every 30 mins, but you can also kick off a manual update. See 'The expoweb-update script' below for details.

Also note that the live website is itself a mercurial checkout (just like your local one), so that checkout has to pull your changes from the server before they are reflected.
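For concreteness, refreshing that server-side checkout by hand would look something like this; the location of the checkout on seagrass is an assumption here:

ssh expo@seagrass.goatchurch.org.uk
cd expoweb          # assumed path of the live site's checkout
hg pull -u          # pull new changesets and update the working copy in one go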

Quick start

If you know what you are doing, here is the basic info on what's where:

  1. expoweb on seagrass (The Website)

hg [clone|pull|push] ssh://expo@seagrass.goatchurch.org.uk/expoweb

  2. loser on seagrass (The survey data)

hg [clone|pull|push] ssh://expo@seagrass.goatchurch.org.uk/loser

  3. tunneldata on seagrass (The Tunnel drawings)

hg [clone|pull|push] ssh://expo@seagrass.goatchurch.org.uk/tunneldata

Photos, scans (logbooks, drawn-up cave segments)

(This is about 3.5GB of stuff which you probably don't actually need locally.)

To sync the files from seagrass to a local expoimages directory:

rsync -av expo@seagrass.goatchurch.org.uk:expoimages /home/expo/fromserver

To sync the local expoimages directory back to seagrass:

rsync -av /home/expo/fromserver/expoimages expo@seagrass.goatchurch.org.uk:

(Do be careful: plain rsync -av never deletes files at the destination, but if you delete piles of stuff locally and then rsync back with --delete added, it'll all get deleted on the server too!)
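If in doubt, do a dry run first: rsync's -n flag lists what would be transferred (or deleted, if --delete is in effect) without touching anything:

# preview the sync back to the server before running it for real
rsync -avn /home/expo/fromserver/expoimages expo@seagrass.goatchurch.org.uk: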


Editing the website

To edit the website, you need a mercurial client. If you are using Windows, TortoiseHg (http://tortoisehg.bitbucket.org/) is highly recommended. Lots of tools for Linux and Mac exist too (http://mercurial.selenic.com/wiki/OtherTools), both GUI and command-line. Once you've downloaded and installed a client, the first step is to create what is called a checkout of the website or the section of the website which you want to work on. This creates a copy on your machine which you can edit to your heart's content. The command to check out the entire expo website is

hg clone ssh://expo@seagrass.goatchurch.org.uk/expoweb

In TortoiseHg, merely right-click on a folder you want to check out to, choose "Mercurial checkout," and enter

ssh://expo@seagrass.goatchurch.org.uk/expoweb
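Either way, you can confirm the new clone points back at the server with 'hg paths', which prints the clone's default push/pull location:

cd expoweb
hg paths
# default = ssh://expo@seagrass.goatchurch.org.uk/expoweb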

After you've made a change, commit it to your local copy with

hg commit   (you can specify filenames to be specific)

or by right-clicking on the folder and choosing commit in TortoiseHg.

That has stored the changes in your local mercurial DVCS, but it has not sent anything back to the server. To do that you need to:

hg push

If someone else is editing the same bit at the same time, you may also need to 'hg merge'.
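A complete round trip, including the merge case, looks roughly like this (the commit messages are just examples):

hg pull && hg update      # pick up other people's changes before you start
# ... edit files ...
hg commit -m "rewrite 204 rigging guide"
hg push

# if the push is rejected because someone pushed while you were editing:
hg pull
hg merge                  # combine their changes with yours
hg commit -m "merge"
hg push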

None of your changes will take effect, however, until the server checks out your changes and runs the expoweb-update script.

The expoweb-update script

The script at the heart of the website update mechanism is a makefile that runs the various generation scripts. It is run every half hour as a cron job, but if you want to check an update you can run it yourself at

/cucc.survex.com/home/cucc/bin/expoweb-update

[Wooknote - this is not actually happening right now - FIXME!]

To run scripts on the server, you need to log in via SSH. The best way to do this in Windows is to download PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/).

The scripts are generally under the 'noinfo' section of the site, just because that has some access control.
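So a manual update is just a matter of logging in over SSH and running the script; the path below follows the old cucc.survex.com location given above and may well have moved on seagrass:

ssh expo@seagrass.goatchurch.org.uk
# run the update makefile by hand; the exact location on seagrass is an assumption
/home/cucc/bin/expoweb-update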

Updating cave pages

Cave description pages are automatically generated from a comma-separated values (CSV) table named CAVETAB2.CSV by a Perl script called make-indxal4.pl, which is run automatically as part of the website update.

The first step is to check out, edit, and check in CAVETAB2.CSV, which is at

/expoweb/noinfo/CAVETAB2.CSV

You need to be somewhat careful with the formatting; each cell needs to be only one line long (i.e. no newlines) or the script will get confused.

And then run expoweb-update as above.
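Putting the whole cycle together, a hypothetical edit looks like this (the awk line is an optional sanity check; the fields are tab-separated despite the .CSV extension, as noted under 'Automation' below):

hg clone ssh://expo@seagrass.goatchurch.org.uk/expoweb
cd expoweb/noinfo
# ... edit CAVETAB2.CSV, keeping every cell on a single line ...
# check that every row has as many fields as the header row
awk -F'\t' 'NR==1 {n=NF} NF!=n {print NR ": " NF " fields, expected " n}' CAVETAB2.CSV
hg commit -m "update cave entry" CAVETAB2.CSV
hg push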

Updating expo year pages

Each year's expo has a documentation index which is in the folder

/expoweb/years

Note that mercurial, unlike subversion, clones whole repositories rather than individual subdirectories, so to edit the 2011 page, for example, you clone all of expoweb and work inside years/2011, as in the sketch below.
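A minimal session for that, with a hypothetical commit message:

hg clone ssh://expo@seagrass.goatchurch.org.uk/expoweb
cd expoweb/years/2011
# ... edit the year's documentation index ...
hg commit -m "2011: add trip write-ups"
hg push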

Adding typed logbooks

Logbooks are typed up and put under the years/nnnn/ directory as 'logbook.txt'.

The formatting is largely freeform, but a bit of markup ('===' around the header, bars separating date, <place> - <description>, and who was on the trip) allows the troggle import script to read it correctly. Underline tags (<u>...</u>) mark who wrote the entry. There is also a format for time-underground info so it can be automagically tabulated.

So the format should be

===2009-07-21|204 - Rigging entrance series| Becka Lawson, Emma Wilson, <u>Jess Stirrups</u>, Tony Rooke===

<Text of logbook entry>

T/U: Jess 1 hr, Emma 0.5 hr
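Because the markup is line-based, ordinary text tools can give a quick overview of a typed-up logbook; for example (assuming the layout shown above):

grep '^===' years/2011/logbook.txt    # list every entry header: date|place - description|people
grep '^T/U:' years/2011/logbook.txt   # list the time-underground lines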


Ticking off QMs

To be written.


Maintaining the survey status table

At http://expo.survex.com/surveys/surtabnam.html there is a table which has a list of all the surveys, whether or not they have been drawn up, and some other info.

This is generated by the script tablizebyname-csv.pl from the input file Surveys.csv (paths for both are in the table under 'Automation' below).
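Regenerating it by hand would presumably be a one-liner along these lines, though the script may hard-code its input and output paths, so check the script itself before trusting this:

# hypothetical invocation; verify the arguments against the script
perl tablizebyname-csv.pl Surveys.csv > surtabnam.html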

History

The CUCC Website was originally created by Andy Waddington in the early 1990s and was hosted by Wookey. The VCS was CVS. The whole site was just static HTML, carefully designed to be RISCOS-compatible (hence the 10-character filenames) as both Wadders and Wookey were RISCOS people then. Wadders wrote a huge amount of info collecting expo history, photos, cave data etc.

Martin Green added the SURVTAB.CSV file to contain tabulated data for many caves, and a script to generate the index pages from it. Dave Loeffler added scripts and programs to generate the prospecting maps. The server moved to Mark Shinwell's machine in the early 2000s, and the VCS was updated to subversion.

After expo 2009 the VCS was updated to hg, because a DVCS makes a great deal of sense for expo (where it goes offline for a month or two and nearly all the year's edits happen).

The site was moved to Julian Todd's seagrass server, but the change from 32-bit to 64-bit machines broke the website autogeneration code, which was only fixed in early 2011, allowing the move to complete. The data has been split into 3 separate repositories: the website, the survey data, the tunnel data.

Automation on cucc.survex.com/expo

The way things normally work is that Python or Perl scripts turn CSV input into HTML for the website. Note that:

  • The CSV files are actually tab-separated, not comma-separated, despite the extension.
  • The scripts can be very picky, and editing the CSVs with Microsoft Excel has broken them in the past - not sure if this is still the case. (A safer way to inspect them is sketched below.)
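One way to eyeball a tab-separated file without an editor that might mangle it, using the standard Unix column and less tools:

# align the tab-separated fields into columns; less -S scrolls sideways instead of wrapping
column -t -s $'\t' noinfo/CAVETAB2.CSV | less -S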
Overview of the automagical scripts on the expo website:

Script location: /svn/trunk/expoweb/noinfo/make-indxal4.pl
Input file: /svn/trunk/expoweb/noinfo/CAVETAB2.CSV
Output file: many
Purpose: produces all cave description pages

Script location: /svn/trunk/expoweb/noinfo/make-folklist.py
Input file: /svn/trunk/expoweb/noinfo/folk.csv
Output file: http://cucc.survex.com/expo/folk/index.htm
Purpose: table of all expo members

Script location: /svn/trunk/surveys/tablize-csv.pl and /svn/trunk/surveys/tablizebyname-csv.pl
Input file: /svn/trunk/surveys/Surveys.csv
Output file: http://cucc.survex.com/expo/surveys/surveytable.html and http://cucc.survex.com/expo/surveys/surtabnam.html
Purpose: survey status page: "wall of shame" to keep track of who still needs to draw which surveys

Purpose: prospecting guide (script, input and output unrecorded)


Website mysteries

The following are questions which stumped Aaron, for people who know the expo website well.

  • Why is there a /home/cucc/www/expo/surveys as well as a /home/cucc/www/surveys, and is there any difference?