# penguicon-trax

## Description

Penguicon Scheduling Web App

First comes an event submission/feedback website. Later, a second site will assign rooms and times to the events. The two sites will interact but will remain separate.

## Installation

### Basic prerequisites

```
$ sudo apt-get install git python python-dev libpq-dev python-pip curl
$ sudo pip install virtualenv
```

Clone the repository and create/activate a new virtualenv.

```
$ git clone https://github.com/MattArnold/penguicontrax
$ cd penguicontrax
$ virtualenv venv --distribute
$ source venv/bin/activate
```

Then, install the dependencies using pip:

```
$ pip install -r requirements.txt
```

Now you can run the app locally with:

```
$ python runserver.py
```

## Optional: Deploy to Heroku

You will need to be added as a collaborator on the Heroku app to be able to push public changes.

Install the Heroku toolbelt:

```
$ wget -qO- https://toolbelt.heroku.com/install-ubuntu.sh | sh
```

You will need to add your SSH key to the Heroku website. Copy the contents of ~/.ssh/id_rsa.pub to the SSH Keys section of https://dashboard.heroku.com/account. If you don't have an SSH key, you can generate one with:

```
$ ssh-keygen -t rsa
```

Log in to Heroku:

```
$ heroku login
```

The mainline development Heroku app is gentle-tor-1515. Add Heroku as a remote to your repo, replacing HEROKU_APP with the name of the app:

```
$ heroku git:remote -a HEROKU_APP
```

Set the secret environment variables, supplying the appropriate value after each equals sign. You can confirm them afterwards with `heroku config`.

```
$ heroku config:set SESSION_SECRET_KEY=
$ heroku config:set TWITTER_KEY=
$ heroku config:set TWITTER_SECRET_KEY=
$ heroku config:set FACEBOOK_APP_ID=
$ heroku config:set FACEBOOK_SECRET=
$ heroku config:set PUBLIC_URL=
```

To deploy, pull and push to Heroku:

```
$ git pull heroku master
$ git push heroku master
```

If you've changed the database schema, you will need to empty the database and reset the web app:

```
$ heroku pg:reset DATABASE
$ heroku restart
```

## Optional: Set up a redis cache to improve performance

With large data sets, certain pages (like the main submission page) can take several SQL queries to generate, so the user may notice significant lag. The app will automatically cache the results of some of these queries if a redis server is connected. To attach a redis server, set REDISTOGO_URL in the environment variables of the server. If deploying to Heroku, a free redis server can be added by running:

```
$ heroku addons:add redistogo
```
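If you are running the app locally against your own redis server instead of Redis To Go, the same mechanism works: export REDISTOGO_URL yourself before starting the app. The URL below is an assumption based on a default local redis install; adjust host and port to match your setup.

```shell
# Assumes a redis server listening on the default local port 6379.
export REDISTOGO_URL="redis://localhost:6379"
# Then start the app as usual:
# python runserver.py
```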

## Optional: Use PostgreSQL instead of SQLite

penguicon-trax is deployed on Heroku, a cloud application platform. Heroku uses PostgreSQL as its database engine. By default, penguicon-trax uses SQLite as its database engine when running locally. For the most part this is all well and good but there are some subtle differences between the two engines. If you're having problems when deployed to Heroku, you can run a local PostgreSQL server and have penguicon-trax connect to it to better simulate the production environment.

First, install PostgreSQL:

```
$ sudo apt-get install postgresql postgresql-contrib
```

Set the password for the postgres database role:

```
$ sudo -u postgres psql postgres
postgres=# \password postgres
Enter new password:
Enter it again:
postgres=# \q
```

Create a database for penguicon-trax:

```
$ sudo -u postgres createdb penguicontrax
```

penguicon-trax relies on the DATABASE_URL environment variable to tell it which database engine to use. Heroku supplies this to the app; if it is unset, the app falls back to SQLite. We can write a script that provides the app with a DATABASE_URL pointing to our local PostgreSQL database. Create a file named psql_runserver.sh with the following contents:

```bash
#!/bin/bash
export DATABASE_URL=postgresql://postgres:<password>@localhost/penguicontrax
python runserver.py
```

You will need to replace `<password>` with the PostgreSQL password you previously set. Now, make the script executable and run it to use penguicon-trax with PostgreSQL:

```
$ chmod +x psql_runserver.sh
$ ./psql_runserver.sh
```
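The DATABASE_URL fallback described above can be sketched in shell. This is illustrative only: the app performs the equivalent check internally in Python, and the SQLite path shown is an assumption, not necessarily the app's actual default.

```shell
# If DATABASE_URL is unset or empty, fall back to a local SQLite database
# (the path is hypothetical; the app chooses its own SQLite location).
DATABASE_URL="${DATABASE_URL:-sqlite:///penguicontrax.db}"
echo "$DATABASE_URL"
```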

## Optional: Set up notification e-mails

penguicontrax will automatically e-mail submitters of events as their submissions move through the accept/reject process. For this to work, you will need to configure an e-mail account on an SMTP server that the app can use. Set the following environment variables to the appropriate values:

```
$ heroku config:set MAIL_ENABLE=True
$ heroku config:set MAIL_SERVER=
$ heroku config:set MAIL_PORT=
$ heroku config:set MAIL_USE_TLS=
$ heroku config:set MAIL_USE_SSL=
$ heroku config:set MAIL_USERNAME=
$ heroku config:set MAIL_PASSWORD=
$ heroku config:set DEFAULT_MAIL_SENDER=
$ heroku config:set ORGANIZATION=
$ heroku config:set MAIL_REPLY_TO=
```

## Optional: Set up auto scheduler

One feature of penguicontrax is an auto scheduler for conventions: the app will automatically schedule presentations in rooms so that presenters don't conflict, while also minimizing the number of RSVP conflicts. This optimization is NP-hard; to solve it, the app generates a linear programming model and then uses one of the freely available linear programming solvers such as clp, cbc, or glpk. Depending on the size of the convention, this may be a computationally intensive task. To make the scheduler faster, the app uses highly optimized C++ to create the model file. Also, the solvers themselves are native applications. Both must be built on the target machine.
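To give a sense of the shape of such a model, here is a minimal sketch of typical scheduling constraints. The variables and sets are illustrative, not the app's actual formulation: let \(x_{e,r,t} = 1\) if event \(e\) is assigned to room \(r\) at time slot \(t\), and let \(E_p\) be the set of events involving presenter \(p\).

```latex
% Illustrative constraints only; not the app's actual model.
% Every event is scheduled exactly once:
\sum_{r}\sum_{t} x_{e,r,t} = 1 \qquad \forall e
% Each room hosts at most one event per time slot:
\sum_{e} x_{e,r,t} \le 1 \qquad \forall r, t
% A presenter cannot be in two events at once:
\sum_{e \in E_p}\sum_{r} x_{e,r,t} \le 1 \qquad \forall p, t
```

A solver such as cbc or glpk then minimizes an objective (for example, the number of RSVP conflicts) subject to constraints like these.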

The native code to be built has two portions: modeler, the C++ program that reads the app's SQL database and generates a .lp file, and the actual solver. modeler relies on soci for database access. Building soci requires cmake and sqlite3, and since we cannot be sure that these packages will be available on the target machine (they are not on Heroku, for example), we will need to build everything from source. Fortunately, this is easily accomplished; from the root of the project, run:

```
$ modeler/makemodeler.sh
```

The makemodeler.sh script will download the source for the prerequisite packages and build everything. Once the build is finished, you can confirm that the modeler works by running the following script:

```
$ modeler/runmodeler.sh
```

If you see a message along the lines of "No database supplied", then the build succeeded.