Personality and Places

January 27, 2015 - by martin

Our paper examining the link between individual personality and the places people visit has just been published in Computers in Human Behavior. It’s open access, so you can go read it for free, now!

In an experiment we ran previously, we asked users of Foursquare to take a personality test and give us access to their checkin history. The personality test gives us a measure of how each person scores for five different factors: Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. The checkin history lists all the places they’ve ever checked in to using Foursquare. Because a couple of hundred people took part in the experiment, we ended up with a large number of individual personalities that we could link to over a hundred thousand venues. In total, this represents a pretty staggering half a million Foursquare checkins that we have personality data associated with.

Our first step with this data has been to see if there are any links between personality factors and the places people choose to visit, and we found some interesting connections.

One of our main findings is that the use of Foursquare for recording checkins correlates well with Conscientiousness. The more conscientious a user is, the more checkins they have recorded and the more venues they have visited. This could be because people with a high Conscientiousness score tend to be quite organised and disciplined, and so are more likely to remember to check in at every place they visit.

The opposite is true for Neuroticism: the more neurotic an individual is, the fewer places they have visited. Neuroticism is associated with negative feelings and a tendency to be less social, which could translate into people going to fewer places, and so checking in less. This pattern appears again when we look only at venues classed as ‘social’ (i.e. somewhere you would go to hang out with friends): the more neurotic someone is, the fewer ‘social’ venues they have been to.

Surprisingly, we found no link between Extraversion and the number of social venues visited. One might expect extraverts (who are social by nature) to go to more social venues, but the data does not support this. In fact, we find no link between Extraversion and any aspect of Foursquare checkins that we have examined so far.

The personality factor of Openness is related to feelings of creativity and artistic expression, and a willingness to experience new things. It is interesting to find a link between Openness and the average distance travelled between checkins: **the more Open an individual is, the further they tend to have travelled**. This could be an Open individual’s desire to experience new things expressing itself through wider travel, and a larger geographic spread of checkins. However, we do not find any link between Openness and the number of different categories of venue visited by a user; the desire for new experiences does not seem to express itself in the range and diversity of places visited.

Ultimately, this data could be incredibly useful in improving venue recommendation systems. Current systems use many different information ‘cues’ to recommend to a user a place they might like to visit. These cues include things such as where they have been in the past, where their friends have been, or where is popular nearby. Perhaps by including aspects of an individual’s personality (so including aspects of why they might visit somewhere) we can increase the usefulness of these recommendations.

There is still a lot of analysis to be done on this data, and both myself and Nyala Noe are busy churning through it to discover other links between personality and the places people visit. As we find more interesting connections, I’ll post more here.


NHS Hackday 2015

January 26, 2015 - by martin

This weekend I took part in an incredibly successful NHS hackday, hosted at Cardiff University and organised by Anne Marie Cunningham and James Morgan. We went as a team from the MSc in Computational Journalism, with myself and Glyn attending along with Pooja, Nikita, Annalisa and Charles. At the last minute I recruited a couple of ringers as well, dragging along ~~Rhys Priestland~~ Dr William Wilberforce Webberley from Comsc and Dr Matthew Williams, previously of this parish. Annalisa also brought along Dan Hewitt, so in total we had a large and diverse team.

The hackday

This was the first NHS hackday I’d attended, but I believe it’s the second event held in Cardiff, so Anne Marie and the team have it down to a fine art. The whole weekend seemed to go pretty smoothly (barring a couple of misunderstandings on our part regarding the pitch sessions!). It was certainly one of the most well organised events that I’ve attended, with all the necessary ingredients for successful coding: much power, many wifi and plenty of food, snacks and coffee. Anne Marie and the team deserve much recognition and thanks for their hard work. I’m definitely in for next year.

The quality of the projects created at the hackday was incredibly high across the board, which was great to see. One of my favourites used an Oculus Rift virtual reality headset to create a zombie ‘game’ that could be used to test people’s peripheral vision. Another standout was a system for logging and visualising the ANGEL factors describing a patient’s health situation. It was really pleasing to see these rank highly with the judges too, coming in third and second in the overall rankings. Other great projects brought an old Open Source project back to life, created a system for managing groups walking the Wales Coast path, and created automatic notification systems for healthcare processes. Overall it was a really interesting mix of projects, many of which have clear potential to become useful products within or alongside the NHS. As Matt commented in the pub afterwards, it’s probably the first hackday we’ve been to where several of the projects have clear original IP with commercial potential.

Our project

We had decided before the event that we wanted to build some visualisations of health data across Wales, something like nhsmaps.co.uk, but working with local health boards and local authorities in Wales. We split into two teams for the implementation: ‘the data team’ who were responsible for sourcing, processing and inputting data, and the ‘interface team’ who built the front-end and the visualisations.

Progress was good, with Matthew and William quickly defining a schema for describing data so that the data team could add multiple data sets and have the front-end automatically pick them up and be able to visualise them. The CompJ students worked to find and extract data, adding them to the github repository with the correct metadata. Meanwhile, I pulled a bunch of D3 code together for some simple visualisations.

By the end of the weekend we had built a fairly decent system. It can visualise a few different types of data at different resolutions, is mostly mobile friendly, and, most importantly, is easily extensible and adaptable. It’s online now on our github pages, and all the code and documentation is in the github repository.

We’ll continue development for a while to improve the usability and code quality, and hopefully we’ll find a community willing to take the code base on and keep improving what could be a fairly useful resource for understanding the health of Wales.

Debrief

We didn’t win any of the prizes, which is understandable. Our project was really focused on the public understanding of the NHS and health, not on solving a particular need within (or for users of) the NHS. We knew this going in to the weekend, and we’d taken the decision that it was more important to work on a project related to the course, so that the students could experience some of the tools and technologies they’ll be using as the course progresses, than to do something more closely aligned with the brief that would perhaps have been less relevant to the students’ work.

I need to thank Will and Matt for coming and helping the team. Without Matt wrangling the data team and showing them how to create json metadata descriptors we probably wouldn’t have anywhere near as many example datasets as we do. Similarly, without Will’s hard work on the front end interface, the project wouldn’t look nearly as good as it does, or have anywhere near the functionality. His last-minute addition of localstorage for personal datasets was a triumph. (Sadly though he does lose some coder points for user agent sniffing to decide whether to show a mobile interface :-D.) They were both a massive help, and we couldn’t have done it without them.

Also, of course, I need to congratulate the CompJ students, who gave up their weekend to trawl through datasets, pull figures off websites and out of PDFs, and create the lovely, easy-to-process .csv files we needed. It was a great effort from them, and I’m looking forward to our next Team CompJ hackday outing.

One thing that sadly did stand out was a lack of participation from Comsc undergraduate students, with only one or two attending. Rob Davies stopped by on Saturday, and both Will and I discussed with him what we can do to increase participation in these events. Hopefully we’ll make some progress on that front in time for the next hackday.

Media

There’s some great photos from the event on Flickr, courtesy of Paul Clarke (Saturday and Sunday).

Elsewhere…

We got a lovely write-up about our project from Dyfrig Williams of the Good Practice Exchange at the Wales Audit Office. Dyfrig also curated a great storify of the weekend.

Hemavault labs have done a round-up of the projects here.


CompJ Labs - Postcodes

January 5, 2015 - by martin

I’ve thrown up a post on compj.cs.cf.ac.uk about some work we’ve done looking at empty properties in Cardiff. I’ll add further posts over the coming weeks about other data analysis we’ve done on this topic, and new work looking at the Welsh Assembly.


Extended Mind Crowdsourcing

December 15, 2014 - by martin

Update 13/01/15: the paper containing the research described below is currently available from the HICSS website.

This post is one I’m cross-posting both here and on the MobiSoc blog. Here, because it’s my personal translation of one of our latest research papers, and there because it’s a very good paper mostly written and driven by Roger Whitaker, so deserves an ‘official’ blog post!

Crowdsourcing is widely used in both business and academia. Business likes it because it allows simple tasks to be outsourced at small cost. Researchers like it because it allows the gathering of large amounts of data from participants, again for minimal cost. (For an example of this, see our TweetCues work (paper here), where we paid Twitter users to take a simple survey and massively increased our sample size for a few dollars.) As technology develops, we can apply crowdsourcing to new problems, particularly those concerned with collective human behaviour and culture.

Crowdsourcing

The traditional definition of crowdsourcing involves several things:

  1. a clearly defined crowd

  2. a task with a clear goal

  3. clear recompense received by the crowd

  4. an identified owner of the task

  5. an online process

The combination of all these things allows us to complete a large set of simple tasks in a short time and often for a reduced cost. It also provides access to global labour markets for users who may not previously have been able to access these resources.

Participatory Computing

Participatory computing is a related concept to crowdsourcing, based around the idea that the resources and data of computing devices can be shared and used to complete tasks. As with crowdsourcing, these tasks are often large, complex and data-driven, but capable of being broken down into smaller chunks that can be distributed to separate computing devices in order to complete the larger task. BOINC is a clear example of this class of participatory computing.
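As a toy illustration (not code from the paper, and the function names are invented for the example), the core participatory computing idea of breaking a large task into device-sized chunks can be sketched in a few lines of Python:

```python
def split_into_chunks(data, chunk_size):
    """Break a large dataset into independently-processable chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def process_chunk(chunk):
    """Stand-in for the work a single participating device would do."""
    return sum(chunk)

data = list(range(100))
chunks = split_into_chunks(data, 10)

# in a real system each chunk would be shipped to a different device;
# here we just loop over them and combine the partial results
total = sum(process_chunk(c) for c in chunks)
print(total)  # 4950, the same result as processing the data in one go
```

BOINC works on the same principle at a much larger scale, distributing chunks to volunteers' machines and aggregating the results centrally.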

*[Figure: participatory computing]*

Extended Mind Crowdsourcing

The extended mind hypothesis describes the way that humans extend their thinking beyond the internal mind, to use external objects. For instance, a person using a notebook to record a memory uses the ‘extended mind’ to record the memory; the internal mind simply recalls that the memory is located in the notebook, an object that is external to the individual.

Extended mind crowdsourcing takes crowdsourcing and participatory computing a step further by including the extended mind hypothesis: systems can use the extended mind of participants, as represented by their devices and objects, to add implicit as well as explicit human computation for collective discovery.

*[Figure: extended mind crowdsourcing]*

What this means is that we can crowdsource the collection of data and completion of tasks using individual users, their devices, and the extended mind that the two together represent. By accessing the information stored within a smartphone or similar personal device, and the wider internet services that the device can connect to, we can access the extended mind of a participant and learn more about his or her behaviour and individual characteristics. In essence, extended mind crowdsourcing captures the way in which humans undertake and respond to daily activity, supporting observation of human life and of our interpretation of and response to the environment. By including social networks and social media communication within the extended mind, it is clear that while an individual extended mind may represent a single human, extended mind crowdsourcing can also represent a group, such as a network or a collective.

By combining the ideas of social computing, crowdsourcing, and the extended mind, we are able to access and aggregate the data that is created through our use of technology. This allows us to extend ideas of human cognition into the physical world, in a less formal and structured way than when using other forms of human computational systems. The reduced focus on task-driven systems allows EMC to be directed at solving loosely defined problems, and those problems where we have no initial expectations of solutions or findings.

This is a new way of thinking about the systems we create in order to solve problems using computational systems focused on humans, but it has the potential to be a powerful tool in our research toolbox. We are presenting this new Extended Mind Crowdsourcing idea this week at HICSS.


Quick and Dirty Twitter API in Python

November 19, 2014 - by martin

QUICK DISCLAIMER: this is a quick and dirty solution to a problem, so may not represent best coding practice, and has absolutely no error checking or handling. Use with caution…

A recent project has needed me to scrape some data from Twitter. I considered using Tweepy, but as it was a project for the MSc in Computational Journalism, I thought it would be more interesting to write our own simple Twitter API wrapper in Python.

The code presented here will allow you to make any API request to Twitter that uses a GET request, so it is really only useful for getting data from Twitter, not sending it. It is also only for use with the REST API, not the streaming API, so if you’re looking for realtime monitoring, this is not the API wrapper you’re looking for. This API wrapper also uses a single user’s authentication (yours), so it is not set up to allow other users to use Twitter through your application.

The first step is to get some access credentials from Twitter. Head over to https://apps.twitter.com/ and register a new application. Once the application is created, you’ll be able to access its details. Under ‘Keys and Access Tokens’ are the four values we’re going to need for the API: the Consumer Key and Consumer Secret, and the Access Token and Access Token Secret. Copy all four values into a new python file, and save it as ‘_credentials.py’. Once we have the credentials, we can write some code to make API requests!
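The ‘_credentials.py’ file just needs to hold the four values as module-level strings. The variable names below match the ones the wrapper code imports; the values shown are placeholders, not real credentials:

```python
# _credentials.py
# replace these placeholder values with the keys from your own
# application at https://apps.twitter.com/
client_id = "YOUR_CONSUMER_KEY"
client_secret = "YOUR_CONSUMER_SECRET"
access_token = "YOUR_ACCESS_TOKEN"
access_secret = "YOUR_ACCESS_TOKEN_SECRET"
```

Keep this file out of version control (add it to .gitignore) so you don't publish your keys by accident.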

First, we define a Twitter API object that will carry out our API requests. We need to store the API url, and some details to allow us to throttle our requests to Twitter to fit inside their rate limiting.

import base64
import hmac
import json
import threading
import time
import urllib.error
import urllib.parse
import urllib.request
import uuid
from hashlib import sha1

# the four values saved in _credentials.py
from _credentials import client_id, client_secret, access_token, access_secret


class Twitter_API:

    def __init__(self):

        # URL for accessing API
        scheme = "https://"
        api_url = "api.twitter.com"
        version = "1.1"

        self.api_base = scheme + api_url + "/" + version

        #
        # seconds between queries to each endpoint
        # queries in this project limited to 180
        # per 15 minutes, so use 175 to stay safe
        query_interval = float(15 * 60) / 175

        #
        # rate limiting timer
        self.__monitor = {'wait': query_interval,
                          'earliest': None,
                          'timer': None}

We add a rate limiting method that will make our API sleep if we are requesting things from Twitter too fast:

    #
    # rate_controller puts the thread to sleep
    # if we're hitting the API too fast
    def __rate_controller(self, monitor_dict):

        #
        # join the timer thread
        if monitor_dict['timer'] is not None:
            monitor_dict['timer'].join()

        # sleep if necessary ('earliest' is None until the first call)
        while monitor_dict['earliest'] is not None and time.time() < monitor_dict['earliest']:
            time.sleep(monitor_dict['earliest'] - time.time())

        # work out when the next API call can be made
        earliest = time.time() + monitor_dict['wait']
        timer = threading.Timer(earliest - time.time(), lambda: None)
        monitor_dict['earliest'] = earliest
        monitor_dict['timer'] = timer
        monitor_dict['timer'].start()
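To see what that throttle works out to in practice, here's the interval arithmetic on its own (using the same constants as above):

```python
# 175 requests spread evenly over a 15-minute window
query_interval = float(15 * 60) / 175
print(round(query_interval, 2))  # 5.14 seconds between requests
```

At one request every ~5.14 seconds we stay comfortably under Twitter's documented limit of 180 requests per 15-minute window.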

The Twitter API requires us to supply authentication headers in the request. One of these headers is a signature, created by encoding details of the request. We can write a function that will take in all the details of the request (method, url, parameters) and create the signature:

    #
    # make the signature for the API request
    def get_signature(self, method, url, params):

        # percent-encode all parameter keys and values
        encoded_params = {}
        for k, v in params.items():
            encoded_k = urllib.parse.quote_plus(str(k))
            encoded_v = urllib.parse.quote_plus(str(v))
            encoded_params[encoded_k] = encoded_v

        # sort the parameters alphabetically by key and join
        # them into a single 'key=value&key=value' string
        sorted_keys = sorted(encoded_params.keys())
        signing_string = "&".join(key + "=" + encoded_params[key]
                                  for key in sorted_keys)

        # construct the base string from the method, url and parameters
        base_string = method.upper()
        base_string += "&"
        base_string += urllib.parse.quote_plus(url)
        base_string += "&"
        base_string += urllib.parse.quote_plus(signing_string)

        # construct the signing key from the two secrets
        signing_key = urllib.parse.quote_plus(client_secret) + "&" + urllib.parse.quote_plus(access_secret)

        # sign the base string with the key, and base64 encode the result
        hashed = hmac.new(signing_key.encode(), base_string.encode(), sha1)
        signature = base64.b64encode(hashed.digest())
        return signature.decode("utf-8")
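To make the signing recipe concrete, here is the same process applied by hand to a toy request, using obviously fake secrets (the real ones come from ‘_credentials.py’): encode and sort the parameters, build the base string, then HMAC-SHA1 it with the two secrets joined by an ampersand.

```python
import base64
import hmac
import urllib.parse
from hashlib import sha1

# toy request parameters and fake secrets, for illustration only
params = {"screen_name": "martinjc", "count": "5"}
consumer_secret = "FAKE_CONSUMER_SECRET"
token_secret = "FAKE_TOKEN_SECRET"
url = "https://api.twitter.com/1.1/statuses/user_timeline.json"

# percent-encode the parameters, sort by key, join into key=value pairs
encoded = {urllib.parse.quote_plus(k): urllib.parse.quote_plus(v)
           for k, v in params.items()}
signing_string = "&".join(k + "=" + encoded[k] for k in sorted(encoded))
print(signing_string)  # count=5&screen_name=martinjc

# base string: METHOD & encoded-url & encoded-parameter-string
base_string = ("GET" + "&" + urllib.parse.quote_plus(url)
               + "&" + urllib.parse.quote_plus(signing_string))

# signing key: the two secrets, percent-encoded and joined with '&'
signing_key = (urllib.parse.quote_plus(consumer_secret) + "&"
               + urllib.parse.quote_plus(token_secret))

# sign and base64-encode; a 20-byte SHA1 digest always
# base64-encodes to 28 characters
hashed = hmac.new(signing_key.encode(), base_string.encode(), sha1)
signature = base64.b64encode(hashed.digest()).decode("utf-8")
print(len(signature))  # 28
```

The same steps, with the real consumer and token secrets, produce the `oauth_signature` value that goes into the Authorization header.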

Finally, we can write a method to actually make the API request:

    def query_get(self, endpoint, aspect, get_params={}):

        #
        # rate limiting
        self.__rate_controller(self.__monitor)

        # ensure we're dealing with strings as parameters
        str_param_data = {}
        for k, v in get_params.items():
            str_param_data[str(k)] = str(v)

        # construct the query url
        url = self.api_base + "/" + endpoint + "/" + aspect + ".json"

        # add the header parameters for authorisation
        header_parameters = {
            "oauth_consumer_key": client_id,
            "oauth_nonce": uuid.uuid4(),
            "oauth_signature_method": "HMAC-SHA1",
            "oauth_timestamp": int(time.time()),
            "oauth_token": access_token,
            "oauth_version": "1.0"
        }

        # collect all the parameters together for creating the signature
        signing_parameters = {}
        for k, v in header_parameters.items():
            signing_parameters[k] = v
        for k, v in str_param_data.items():
            signing_parameters[k] = v

        # create the signature and add it to the header parameters
        header_parameters["oauth_signature"] = self.get_signature("GET", url, signing_parameters)

        # build the OAuth Authorization header string
        header_string = "OAuth "
        count = 0
        for k, v in header_parameters.items():
            header_string += urllib.parse.quote_plus(str(k))
            header_string += "=\""
            header_string += urllib.parse.quote_plus(str(v))
            header_string += "\""
            count += 1
            if count < len(header_parameters):
                header_string += ", "

        headers = {
            "Authorization": header_string
        }

        # create the full url including parameters
        url = url + "?" + urllib.parse.urlencode(str_param_data)
        request = urllib.request.Request(url, headers=headers)

        # make the API request
        try:
            response = urllib.request.urlopen(request)
        except urllib.error.HTTPError as e:
            print(e)
            raise e
        except urllib.error.URLError as e:
            print(e)
            raise e

        # read the response and return the json
        raw_data = response.read().decode("utf-8")
        return json.loads(raw_data)

Putting this all together, we have a simple Python class that acts as an API wrapper for GET requests to the Twitter REST API, including the signing and authentication of those requests. Using it is as simple as:

 ta = Twitter_API()

 # retrieve tweets for a user
 params = {
    "screen_name": "martinjc",
 }

 user_tweets = ta.query_get("statuses", "user_timeline", params)

As always, the full code is online on Github, in both my personal account and the account for the MSc Computational Journalism.