All posts by martin

Quick and Dirty Twitter API in Python

QUICK DISCLAIMER: this is a quick and dirty solution to a problem, so it may not represent best coding practice, and it has absolutely no error checking or handling. Use with caution…

A recent project has required me to scrape some data from Twitter. I considered using Tweepy, but as it was a project for the MSc in Computational Journalism, I thought it would be more interesting to write our own simple Twitter API wrapper in Python.

The code presented here will allow you to make any Twitter API request that uses a GET request, so it is really only useful for getting data from Twitter, not sending data to it. It also only works with the REST API, not the streaming API, so if you’re looking for real-time monitoring, this is not the API wrapper you’re looking for. This wrapper also uses a single user’s authentication (yours), so it is not set up to allow other users to use Twitter through your application.

The first step is to get some access credentials from Twitter. Head over to https://apps.twitter.com/ and register a new application. Once the application is created, you’ll be able to access its details. Under ‘Keys and Access Tokens’ are four values we’re going to need for the API: the Consumer Key and Consumer Secret, and the Access Token and Access Token Secret. Copy all four values into a new Python file, and save it as ‘_credentials.py’. The images below walk through the process. Also, don’t try to use the credentials from these images; this app has already been deleted, so they won’t work!
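The wrapper code later in this post reads the credentials as module-level variables. Assuming the variable names used in the rest of the code (`client_id`, `client_secret`, `access_token` and `access_secret`), a minimal `_credentials.py` might look like this, with placeholders where your real keys would go:

```python
# _credentials.py
# Twitter application credentials -- the values here are placeholders,
# substitute the four values from your own application's details page.
client_id = "YOUR_CONSUMER_KEY"
client_secret = "YOUR_CONSUMER_SECRET"
access_token = "YOUR_ACCESS_TOKEN"
access_secret = "YOUR_ACCESS_TOKEN_SECRET"
```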

Once we have the credentials, we can write some code to make some API requests!

First, we define a Twitter API object that will carry out our API requests. We need to store the API url, and some details to allow us to throttle our requests to Twitter to fit inside their rate limiting.

import base64
import hmac
import json
import threading
import time
import urllib.error
import urllib.parse
import urllib.request
import uuid
from hashlib import sha1

# the four values saved earlier in _credentials.py
from _credentials import client_id, client_secret, access_token, access_secret


class Twitter_API:

  def __init__(self):

    # URL for accessing API
    scheme = "https://"
    api_url = "api.twitter.com"
    version = "1.1"

    self.api_base = scheme + api_url + "/" + version

    #
    # seconds between queries to each endpoint;
    # queries in this project are limited to 180 per 15 minutes,
    # so we aim for 175 to leave a little headroom
    query_interval = float(15 * 60) / 175

    #
    # rate limiting timer
    self.__monitor = {'wait': query_interval,
      'earliest': None,
      'timer': None}

We add a rate limiting method that will make our API sleep if we are requesting things from Twitter too fast:

  #
  # rate_controller puts the thread to sleep
  # if we're hitting the API too fast
  def __rate_controller(self, monitor_dict):

    #
    # join the timer thread
    if monitor_dict['timer'] is not None:
      monitor_dict['timer'].join()

    # sleep if necessary ('earliest' is None before the first request)
    while monitor_dict['earliest'] is not None and time.time() < monitor_dict['earliest']:
      time.sleep(monitor_dict['earliest'] - time.time())

    # work out when the next API call can be made
    earliest = time.time() + monitor_dict['wait']
    timer = threading.Timer(earliest - time.time(), lambda: None)
    monitor_dict['earliest'] = earliest
    monitor_dict['timer'] = timer
    monitor_dict['timer'].start()

The Twitter API requires us to supply authentication headers in the request. One of these headers is a signature, created by encoding details of the request. We can write a function that will take in all the details of the request (method, url, parameters) and create the signature:

 # 
 # make the signature for the API request
 def get_signature(self, method, url, params):
 
   # escape special characters in all parameter keys
   encoded_params = {}
   for k, v in params.items():
     encoded_k = urllib.parse.quote_plus(str(k))
     encoded_v = urllib.parse.quote_plus(str(v))
     encoded_params[encoded_k] = encoded_v 

   # sort the parameters alphabetically by key
   sorted_keys = sorted(encoded_params.keys())

   # create a string from the parameters
   signing_string = ""

   count = 0
   for key in sorted_keys:
     signing_string += key
     signing_string += "="
     signing_string += encoded_params[key]
     count += 1
     if count < len(sorted_keys):
       signing_string += "&"

   # construct the base string
   base_string = method.upper()
   base_string += "&"
   base_string += urllib.parse.quote_plus(url)
   base_string += "&"
   base_string += urllib.parse.quote_plus(signing_string)

   # construct the key
   signing_key = urllib.parse.quote_plus(client_secret) + "&" + urllib.parse.quote_plus(access_secret)

    # sign the base string with the key using HMAC-SHA1, and base64 encode the result
   hashed = hmac.new(signing_key.encode(), base_string.encode(), sha1)
   signature = base64.b64encode(hashed.digest())
   return signature.decode("utf-8")
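To see the signing recipe in isolation: percent-encode and sort the parameters, build the base string, sign it with HMAC-SHA1 using the two secrets, and base64 the digest. Here is a self-contained sketch of those same steps; the credentials and parameter values are made up for illustration:

```python
import base64
import hmac
import urllib.parse
from hashlib import sha1

def sign_request(method, url, params, consumer_secret, token_secret):
    # percent-encode the parameters, then sort and join as k=v&k=v
    encoded = {urllib.parse.quote_plus(str(k)): urllib.parse.quote_plus(str(v))
               for k, v in params.items()}
    param_string = "&".join(f"{k}={encoded[k]}" for k in sorted(encoded))

    # base string: METHOD&encoded-url&encoded-param-string
    base_string = "&".join([method.upper(),
                            urllib.parse.quote_plus(url),
                            urllib.parse.quote_plus(param_string)])

    # signing key: encoded consumer secret & encoded token secret
    key = (urllib.parse.quote_plus(consumer_secret) + "&" +
           urllib.parse.quote_plus(token_secret))

    # HMAC-SHA1 the base string and base64 encode the 20-byte digest
    digest = hmac.new(key.encode(), base_string.encode(), sha1).digest()
    return base64.b64encode(digest).decode("utf-8")

sig = sign_request("GET",
                   "https://api.twitter.com/1.1/statuses/user_timeline.json",
                   {"screen_name": "martinjc", "count": 10},
                   "fake-consumer-secret", "fake-token-secret")
print(sig)  # a 28-character base64 string (a SHA-1 digest is 20 bytes)
```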

Finally, we can write a method to actually make the API request:

  def query_get(self, endpoint, aspect, get_params=None):

    # use None as the default to avoid a shared mutable default argument
    if get_params is None:
      get_params = {}

    #
    # rate limiting
    self.__rate_controller(self.__monitor)

    # ensure we're dealing with strings as parameters
    str_param_data = {}
    for k, v in get_params.items():
      str_param_data[str(k)] = str(v)

    # construct the query url
    url = self.api_base + "/" + endpoint + "/" + aspect + ".json"

    # add the header parameters for authorisation
    header_parameters = {
      "oauth_consumer_key": client_id,
      "oauth_nonce": uuid.uuid4(),
      "oauth_signature_method": "HMAC-SHA1",
      "oauth_timestamp": int(time.time()),
      "oauth_token": access_token,
      "oauth_version": "1.0"
    }

   # collect all the parameters together for creating the signature
   signing_parameters = {}
   for k, v in header_parameters.items():
     signing_parameters[k] = v
   for k, v in str_param_data.items():
     signing_parameters[k] = v

   # create the signature and add it to the header parameters
   header_parameters["oauth_signature"] = self.get_signature("GET", url, signing_parameters)

    # add the OAuth headers
    header_string = "OAuth "
    count = 0
    for k, v in header_parameters.items():
      header_string += urllib.parse.quote_plus(str(k))
      header_string += "=\""
      header_string += urllib.parse.quote_plus(str(v))
      header_string += "\""
      count += 1
      if count < len(header_parameters):
        header_string += ", "

   headers = {
     "Authorization": header_string
   }

   # create the full url including parameters
   url = url + "?" + urllib.parse.urlencode(str_param_data)
   request = urllib.request.Request(url, headers=headers)

    # make the API request
    try:
      response = urllib.request.urlopen(request)
    except urllib.error.HTTPError as e:
      print(e)
      raise e
    except urllib.error.URLError as e:
      print(e)
      raise e

   # read the response and return the json
   raw_data = response.read().decode("utf-8")
   return json.loads(raw_data)

Putting this all together, we have a simple Python class that acts as an API wrapper for GET requests to the Twitter REST API, including the signing and authentication of those requests. Using it is as simple as:

ta = Twitter_API()

# retrieve tweets for a user
params = {
   "screen_name": "martinjc",
}

user_tweets = ta.query_get("statuses", "user_timeline", params)

As always, the full code is online on Github, in both my personal account and the account for the MSc Computational Journalism.


How do people decide whether or not to read a tweet?

It turns out that an existing relationship with the author of the tweet is one of the main factors influencing how someone decides whether or not to read a tweet. At the same time, a high number attached to a tweet, such as its retweet count, can also make the tweet more attractive to readers.

Our latest Open Access research looks at how much effect the information displayed about a tweet has on whether people decide to read it or not.

By showing hundreds of Twitter users the information about two tweets but not the tweets themselves, and then asking the users which tweet they would like to read, we have been able to look at which information is more important when users are deciding to read a tweet.

We looked at two different types of information:

  1. Simple numbers that describe the tweet, such as the number of retweets it has, or numbers that describe the author, such as how many followers they have, or how many tweets they’ve written.
  2. Whether a relationship between the reader and the author is important, and whether that relationship is best shown through subtle hints or direct information.

When readers can see only one piece of information, the case is clear: they’d rather read the tweet written by someone they are following. Readers can easily recognise the usernames, names, and profile images of people they already follow, and are likely to choose to read content written by someone they follow (instead of content written by a stranger) around 75% of the time. If all they can see is a piece of numerical information, they would rather read the tweet with the highest number, no matter what that number is. The effect is strongest with the number of retweets, followed by the number of followers, but even for the number of accounts the author follows and the number of tweets they have written, the effect is significant.

When readers can see two pieces of information, one about their relationship with the author, and one numerical, there are two cases to look at. When the author they follow also has a high numerical value, readers will choose that tweet in around 80% of the cases. When the author they already follow has a lower numerical value, it is still the existing relationship that is more of a draw. Readers would rather read a tweet from someone they know that has a low number of retweets, than one from a stranger with a high number of retweets.

This work offers an understanding of how the decision-making process works on Twitter when users are skimming their timelines for something to read, and has particular implications for the display and promotion of non-timeline content within content streams. For instance, readers may pay more attention to adverts and promoted content if the link between themselves and the author is highlighted.

Previous results from an early experiment were published at SocialCom. The results in this new paper are from a modified and expanded version of this earlier experiment.


Beards, ‘Taches and Testicles

This is me:

mildly hungover morning selfie

Obviously the first thing you notice, after my devilishly handsome good looks, is that I have around the lower half of my face what might, if one is being kind, be described as a ‘beard’. It is patchy, it is more often than not unkempt, and it is quite ginger, but it is somewhat beard-like. I can no longer remember when I grew this beard, but I like it. I like it so much that I refused to shave it off when I graduated in 2013, and again when I got married earlier this year.

However, ominous things have happened. Recently, a mate and colleague done a tweet:

“Good on Pete” I thought. Good cause. I did Movember back in 2011, and it was hard, because quite frankly with a moustache I look like a complete tit. At the time I was doing it, I think Pete and I were sharing an office, so he knows how much of a tit you can look like during Movember, yet he’s chosen to do it anyway. Well done.

Of course, you won’t catch me doing it. I have a beard now, and I won’t shave that off. Also, as I mentioned, I look like a complete tit when I grow a moustache. It was fine in 2011, I was only an RA, so I could just hide in the office and work. The only person affected was my wife, who sadly had to be seen in public with me. I’m a lecturer now. I can’t just hide in my office. I have to teach. I have to stand up in front of students. I can’t do that looking like a person who belongs on some sort of list.

Then Vince Knight joined Pete’s team:

“Well done Vince” I thought. Good cause. At least Pete won’t look so daft walking around campus with a ‘tache now. There’ll be two of you at least. Not me of course. No way.

Then Pete done another tweet:

Oh.

Pete’s called me out. He wants me to join in. Maybe we’ll just all ignore him and it’ll go away.

Then I done a tweet:

WTF? What did I just do? Did I agree to do Movember again? Why? I have no idea. Perhaps I enjoy looking like a tit?

So. I joined. As did many others that Pete called out. And now we’re all going to grow moustaches and demand money from our friends, relatives and colleagues. It’s a good cause. You can donate to us, our team page is here.

First though, there’s business to take care of. The beard had to go. I had to locate my shaving equipment, which has not been used in many years, and attempt to remove the lovely facial hair to which I have become so attached, without slicing my face apart in the process:

WHAT HAVE I DONE?

So that’s it. The beard is off and I am clean-shaven for the first time in I don’t know how long. This, I think, is quite the sacrifice. But there is more to come. The ‘tache is on its way – slowly working its way out of my upper lip. I am going to look terrible. If you in any way feel inclined, please make it worth it. Donate to me or the team. Don’t let my beard have fallen in vain.  After all (I came up with this last night while very drunk and I LOVE IT):  beards grow back. Balls don’t.

Computational Journalism Manifesto

Computational Journalism – ‘a Manifesto’

While Glyn and I have been discussing the new MSc course between ourselves and with others, we have repeatedly come up against the same issues and themes. As a planning exercise earlier in the summer, we gathered some of these together into a ‘manifesto’.

The manifesto is online on our main ‘Computational Journalism’ website with a bit of extra commentary, but I thought I’d upload it here as well. Any comments should probably be directed to the article on the CompJ site, so I’ve turned them off just for this article.

 

GeoJSON and topoJSON for UK boundaries

I’ve just put an archive online containing GeoJSON and topoJSON for UK boundary data. It’s all stored on Github, with a viewer and download site hosted on Github pages.

Browser for the UK topoJSON stored in the Github repository

The data is all created from shapefiles released by the Office for National Statistics, Ordnance Survey and National Records of Scotland, all under the Open Government and OS OpenData licences.

In later posts I’ll detail how I created the files, and how to use them to create interactive choropleth maps.

CCG to WPC lookup

CCGs and WPCs via the medium of OAs

As I was eating lunch this afternoon, I spotted a conversation between @JoeReddington and @MySociety whizz past in Tweetdeck. I traced the conversation back to the beginning and found this request for data:

I’ve been doing a lot of playing with geographic data recently while preparing to release a site making it easier to get GeoJSON boundaries of various areas in the UK. As a result, I’ve become pretty familiar with the Office for National Statistics Geography portal, and the data available there. I figured it must be pretty simple to hack something together to provide the data Joseph was looking for, so I took a few minutes out of lunch to see if I could help.

Checking the lookup tables at the ONS, it was clear that unfortunately there was no simple ‘NHS Trust to Parliamentary Constituency’ lookup table. However, there were two separate lookups involving Output Areas (OAs). One allows you to look up which Parliamentary Constituency (WPC) an OA belongs to. The other allows you to look up which NHS Clinical Commissioning Group (CCG) an OA belongs to. Clearly, all that’s required is a bit of quick scripting to tie the two together via the Output Areas.

First, let’s create a dictionary with an entry for each CCG. For each CCG we’ll store its ID, name, and a set of OAs contained within. We’ll also add an empty set for the WPCs contained within the CCG:

import csv

data = {}

# extract information about clinical commissioning groups
with open('OA11_CCG13_NHSAT_NHSCR_EN_LU.csv', 'r') as oa_to_ccg_file:
  reader = csv.DictReader(oa_to_ccg_file)
  for row in reader:
    if not data.get(row['CCG13CD']):
      data[row['CCG13CD']] = {'CCG13CD': row['CCG13CD'],
                              'CCG13NM': row['CCG13NM'],
                              'PCON11CD list': set(),
                              'PCON11NM list': set(),
                              'OA11CD list': set()}
    data[row['CCG13CD']]['OA11CD list'].add(row['OA11CD'])

Next we create a lookup table that allows us to convert from OA to WPC:

# extract information for output area to constituency lookup
oas = {}
pcon_nm = {}

with open('OA11_PCON11_EER11_EW_LU.csv', 'r') as oa_to_pcon_file:
  reader = csv.DictReader(oa_to_pcon_file)
  for row in reader:
    oas[row['OA11CD']] = row['PCON11CD']
    pcon_nm[row['PCON11CD']] = row['PCON11NM']

As the penultimate step, we go through the CCGs, and for each one we go through the list of OAs it covers and look up the WPC each OA belongs to:

# go through all the ccgs and lookup pcons from oas
for ccg, d in data.items():

  for oa in d['OA11CD list']:
    d['PCON11CD list'].add(oas[oa])
    d['PCON11NM list'].add(pcon_nm[oas[oa]])

  # remove the OA list so it doesn't end up as an
  # extra column in the output CSV
  del d['OA11CD list']

Finally we just need to output the data:

for d in data.values():

  d['PCON11CD list'] = ';'.join(d['PCON11CD list'])
  d['PCON11NM list'] = ';'.join(d['PCON11NM list'])

with open('output.csv', 'w') as out_file:
  writer = csv.DictWriter(out_file, ['CCG13CD', 'CCG13NM', 'PCON11CD list', 'PCON11NM list'])
  writer.writeheader()
  writer.writerows(data.values())

Run the script, and we get a nice CSV with one row for each CCG, each row containing a list of the WPC ids and names the CCG covers.
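The pattern in this script, joining two lookup tables through a shared intermediate key, can be sketched with toy data. The codes below are invented for illustration, not real ONS identifiers:

```python
# Toy illustration of joining two lookups through a shared intermediate key:
# CCGs map to sets of OAs, OAs map to WPCs, so CCGs map to sets of WPCs.
ccg_to_oas = {
    "CCG_A": {"OA1", "OA2"},
    "CCG_B": {"OA3"},
}
oa_to_wpc = {"OA1": "WPC_X", "OA2": "WPC_Y", "OA3": "WPC_X"}

# for each CCG, collect the WPC of every OA it covers
ccg_to_wpcs = {ccg: {oa_to_wpc[oa] for oa in oas}
               for ccg, oas in ccg_to_oas.items()}

print(ccg_to_wpcs)  # CCG_A covers WPC_X and WPC_Y; CCG_B covers only WPC_X
```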

Of course, this data only covers England (as CCGs are a division in NHS England). Although there don’t seem to be lookups for OAs to Health Boards in Scotland, or from OAs to Local Health Boards in Wales, it should still be possible to do something similar for these countries using Parliamentary Wards as the intermediate geography, as lookups for Wards to Health Boards and Local Health Boards are available. It’s also not immediately clear how well the boundaries for CCGs and WPCs match up; that would require further investigation, depending on what the lookup is to be used for.

All the code, input and output for this task is available on my github page.

View over Cardiff Bay from Northcliffe

sitting on the dock of the bay

While we negotiate the transition from the old house which we’ve sold to the new house we’ve just bought we’ve been renting a lovely flat up on Penarth head. One of the main benefits of this flat is the glorious view over Cardiff Bay and to the city centre beyond. No matter what time it is, whenever I pass by the living room window I end up staring out across the city. During the day, there’s boats coming and going through the barrage locks, or into the docks proper. At night the city is lit up with a terrible orange urban glow that somehow looks both peaceful and exciting. I’ve spent a lot of time just stood on the balcony watching, and it’s been quite relaxing. Not only that, but I’ve had the opportunity to see some fairly interesting occurrences; especially when there’s been an unusual visitor to Cardiff docks, such as this tall ship we had visiting earlier in the year:

The Stavros S Niarchos leaving Cardiff Docks

This was the case again this evening, when we were able to stand and watch the warships of various flags and types leaving Cardiff docks after the conclusion of the NATO summit in Newport. Leaving aside any particular feelings about militarisation, it is still genuinely interesting to see these things in your home city, even more so when you’ve got a good view.

Unfortunately despite still being up on the hill overlooking the city, the new house does not have such a commanding view of the docks, bay, or Cardiff. Losing that is one of the worst things about having to move. I guess I’ll just have to get used to putting my shoes on and leaving the house whenever I want to stare out over the bay…

View over Cardiff Bay from Northcliffe

 

The Graphical Web 2014

(Grumpy) Winchester Cathedral Selfie

Last week I had a lovely time down in Winchester with m’colleague, attending The Graphical Web 2014. This year the theme was ‘Visual Storytelling’, so I’d gone along to see what new things we could learn about visualisation to include in the MSc in Computational Journalism. We’d also already had a few conversations about the course with people who were going to be at the conference, so we were planning to take the opportunity to chat in person about their involvement.

There were many excellent informative and entertaining talks, ranging from the process behind the redesign of Google Maps, through how Twitter does data visualisation, and on to what happens when your data visualisation becomes immensely popular. I’d highly recommend anyone with an interest in any of this to take some time to look through the schedule and watch the videos of some of the talks – I’ll certainly be forcing the MScCompJ students to watch a few.

Scott Murray educates us on the best design process

There were some interesting messages from people at the conference that I’ll be taking forward with my own work and trying to impart to the students. One that is key, I think, is to strike the right balance between detail and simplicity when presenting data. This was mentioned several times throughout the conference, but it really is important. Too much information in your visualisation and you can alienate the reader and confuse or hide your message. Not enough information and the context is lost, and the use of the design to the more advanced reader is reduced. It’s one of those balancing acts that we find so often when trying to mix both people and computers. Attempting to solve this problem and find this balance is challenging and interesting, and I look forward to seeing how the students next year cope with it.

Overall, it was a really good conference. I met a number of interesting people,  found a whole set of new people to follow on Twitter, and returned to Cardiff excited about the year ahead.

 

Unified Diff and recruiting Guest Lecturers

Last week I gave a quick lightning talk at UnifiedDiff – a local tech meetup here in Cardiff. The main point of the talk was to try and recruit more industry involvement for our new MSc in Computational Journalism – preferably by getting some web developers and software engineers in to give guest lectures on the tools, languages and processes they use.

The talk went well, and I’ve had several offers from people wanting to get involved and add some real value to the course, which is brilliant. Of course, there’s always room to add more, so if you’re interested in coming and talking to our students, get in touch!

If you’re interested, the slides from the talk are here

 

Us, family and friends at our wedding

Wedding!

On the 25th May 2014, after many years of procrastination and denials that we would ever get married, Lisa and I finally tied the knot. It was a superb day, plenty of fun was had by all, and I thought it was worth writing a little bit about some of the companies, suppliers and individuals who helped make it so good. If you’re planning a wedding in Shrewsbury or Shropshire, you can do far worse than to ask these guys for their help:

Venues:

Shrewsbury Castle

The ceremony venue was in what is almost the ‘default’ venue for civil ceremonies in Shrewsbury: Shrewsbury Castle. You can’t use the word ‘fantastic’ enough when you’re talking about the castle – it really is a great venue. We got married there, my mother got married there recently, and our friends got married there before her. It’s limited in terms of the numbers you can have in the ceremony itself, but that really didn’t bother us, and so the castle was the obvious choice. Ian (the castle custodian) was really helpful, from our first contact onwards. He was always available to answer questions, and on the wedding day he worked really hard to help everything go smoothly. Such a lovely, friendly chap, we knew we could relax about the ceremony as he had everything under control.

Drapers Hall, Shrewsbury

The reception venue was Drapers Hall, a great restaurant just around the corner from Shrewsbury Castle. When we visited home in 2013 to look for venues, I thought this was kind of a strange option – it wasn’t anything like the more traditional hotels, country houses and renovated barns that we were looking at. In the end, I think it was that fact that made it perfect. We spoke to Nigel, who runs the restaurant, about our wedding, and from the moment he started talking about food, describing the kind of thing he could make for the wedding breakfast, and then for the evening reception, we were sold. Because he wasn’t a hotel trying to sell us a ‘package’ deal, we were able to tailor everything to our own tastes and needs. The venue itself is amazing; a great old Shrewsbury building, full of history, and decorated with a mix of old and new that works so well.

Dessert - chocolate fondant
Inaccurate Dessert!

The food was glorious – all our guests got a choice of starters and main courses, and they all tasted and looked amazing. The staff were great, very helpful, always on hand without being overbearing, and nothing was ever too much trouble, all evening. We had a dedicated contact all day, who introduced himself to Lisa as she was wandering the halls the evening before the wedding (unaccustomed to having nothing to do) by saying “Hi, I’m Tim. I’ll be your guy for the wedding day”. He totally was our guy; whenever we needed something, he was there to help. He kept us up to date with plans and timing, and managed the whole event to help it go off smoothly.  The rest of the staff were also consistently brilliant, doing everything from re-making a playlist on spotify at short notice, to dealing with some idiot (me) knocking over the celebration cake in the middle of the evening party. The rooms in the hotel are glorious, and there are only six of them, so you can restrict who you invite to stay over! I was so pleased with the whole event, it was really good fun.

Flowers:

Bouquet

This was mainly Lisa’s domain, for obvious reasons (I have very little clue about what is ‘good’ flowers), and she went with florists in the centre of Shrewsbury called Lipstick & Gin. Again, fantastic service, lovely people, and the flowers they made were not only beautiful and fabulously scented as requested, but they were reasonably priced too. Lorraine the florist even advised on using smaller (and cheaper!) bouquets as the bride was so tiny. They happily delivered the flowers to the castle and to Drapers the day before the wedding, and we had no problems or complaints with any of them. A top choice.

Dress:

Wedding Dress
Dress!

Again, not really my domain, but after visiting every wedding dress shop in the world in Cardiff and Shrewsbury, and conducting 3 months research into what did and didn’t suit, Lisa went with Hayley J. Obviously I can’t really comment on the process, having not been involved, and I haven’t even met Hayley herself, but Lisa assures me that the whole thing was done extremely well. The story she tells is that she had sort of decided what she wanted, but couldn’t find anything that ticked all of the boxes, or anything that fit properly. She was only visiting Hayley on the off chance, expecting that a custom made dress would be far too expensive. Within about thirty seconds of meeting her, Hayley had described exactly what Lisa wanted without even asking what she was looking for in a dress, and had quoted a more reasonable price than any of the ‘off the peg’ dresses Lisa had been considering. A couple of fitting sessions later, and Lisa had the most lovely wedding dress ever. I think she looked amazing in it, and I know she was delighted.

Suit:

Suit
Typical, pulling a face in the best photo of the suit…

Originally, I was going to hire suits for myself and the groomsmen. However, I wanted to wear an everyday three-piece suit for the wedding, I wasn’t really interested in going with a morning suit as might be more traditional. Then we actually looked into hiring normal style suits, and they were pretty awful. Plus, hiring suits is expensive. So, with only a couple of months to go before the wedding, I told my groomsmen and family they were on their own, and I wasn’t hiring them suits. Instead, I was going to take the suit budget and blow it all on me, getting a suit tailor made. I checked a couple of places online, but then went with Martin David. I turned up there expecting to be told there was no way that they could make a suit in time, but actually they said they could do it in seven weeks. Again, the guys there were friendly and helpful, they explained all the choices and decisions well, and I felt totally happy all the way through the process. Plus, it was nowhere near as expensive as I thought it would be. I ended up with a really classic looking three piece suit, that will last me well for years.

Bridesmaids Dresses

Bridesmaids!

Again, a Lisa thing. I think that, as with her own dress, she tried every shop in the world for Bridesmaids dresses, before deciding on ordering some through Wedding World in Shrewsbury. As with all the people we met while organising our wedding, they were extremely friendly, knowledgeable and helpful. The dresses arrived on time, and they even dealt with a couple of small manufacturing problems quickly and efficiently.

Wedding Car:

Car and Driver
Car! (Also Driver!)

I’m pretty sure that Frank Painter & Sons is probably the best choice for hiring a wedding car in Shrewsbury, but we hired them because of a pun. I mean, their cars are ace, and their prices are reasonable, but it was the pun that sold it to us. While out touring wedding venues, we bumped into a couple of their drivers who were sat outside a hotel while the ceremony happened inside. We mentioned we were getting married, and asked if they had a card we could take away so we could call them later. At some point, we mentioned that we were only looking for one car, for Lisa, and one of the drivers said “well, you could always have one each. You know, his and hearse!” It was hilarious. As long as you know that Frank Painter & Sons is also a funeral directors. Which you didn’t. But you can see why we hired them, eh?

Band:

Band!

Oh the music. We agonised for ages about the music. Then at almost the last minute, we booked the Hot Jazz Biscuits, and I am so glad we did. They’re actually run by an old school friend, although I didn’t realise it at the time. We’d heard some good things about them, and their videos online looked pretty good, so not really knowing what we wanted, we went with them. They were awesome. They turned up on time, were set up and ready to go when the party needed to start, and they played wonderful music all evening. We didn’t really have a plan on how many sets we wanted, or when they should play, so they just took care of it themselves. We didn’t even have a first dance planned (we just couldn’t decide) so we told them to play something and we’d dance to it! That’s how we ended up with Van Morrison’s Moondance as our first dance. The music was good, and we danced all night, which it turns out is exactly what I was looking for. They even did a few encores for us, even though they’d played for longer than we’d paid them for. A superb band that kept the party going all night.

Photographers:

This is a bit irrelevant in a post about a Shrewsbury wedding, as we didn’t use photographers from Shrewsbury, or even Shropshire, but they were ace so they deserve a mention too. We used the lovely Caroline and Ian from weheartwedding, based in Cardiff. Lisa met them at a wedding fair, they were nice, and they were happy to travel as long as we paid for fuel. Professional and lovely all the way through the day, I can’t recommend them enough. We got the photos through just a few days ago, and they’re all brilliant (most of the photos in this post came from them). Choosing which ones we want to print out and display is going to be a really hard task indeed…

 

Lisa and Martin Wedding
Wedding!

MSc Computational Journalism about to launch

For the last two years I’ve been working on a project with some colleagues in the school of Journalism, Media and Cultural Studies (JOMEC) here at Cardiff University and it’s finally all coming together. This week we’ve been able to announce that (subject to some final internal paperwork wrangling) we’ll be launching an MSc in Computational Journalism this September. The story of how the course came about is fairly long, but starts simply with a tweet (unfortunately missing the context, but you get the drift):

An offer via social media from someone I’d never met, asking to pick my brains about an unknown topic. Of course, I jumped at the invite:

That ‘brain picking’ became an interesting chat over coffee in one of the excellent coffee shops in Cardiff, where Glyn and I discussed many things of interest, and many potential areas for collaboration – including the increased use of data and coding within modern journalism. At one point during this chat, m’colleague Glyn said something like “do you know, I think we should run a masters course on this.” I replied with something along the lines of “yes, I think that’s a very good idea.”

That short conversation became us taking the idea of an MSc in Computational Journalism to our respective heads of schools, which became us sat around the table discussing what should be in such a course, which then became us (I say us, it was mainly all Richard) writing pages of documentation explaining what the course would be and arguing the case for it to the University. Last week we held the final approval panel for the course, where both internal and external panel members all agreed that we pretty much knew what we were doing, that the course was a good idea and had the right content, and that we should go ahead and launch it.

From 25th July 2012 to 1st April 2014 is a long time to get an MSc up and running, but we’ve finally done it. Over that time I’ve discovered many things about the University and its processes, drunk many pints of fine ale as we tried to hammer out a course structure in various pubs around the city, and have come close on at least one occasion to screaming at a table full of people, but now it’s done. As I write, draft press releases are being written, budgets are being sorted, and details are being uploaded to coursefinder. With any luck, September will see us with a batch of students ready and willing to step onto the course for the first time. It’s exciting, and I can’t wait.