How personalisation could be changing your identity online

Attempts to model your web experience lead to fears of an echo chamber effect, but rather than reinforcing your sense of self, the process might be altering it

15th September 2016
By Tanya Kant


Wherever you go online, someone is trying to personalise your web experience. Your preferences are pre-empted, your intentions and motivations predicted. That toaster you briefly glanced at three months ago keeps returning to haunt your browsing in tailored advertising sidebars. And it's not a one-way street. In fact, the quite impersonal mechanics of some personalisation systems may not only influence how we see the world, but how we see ourselves.

It happens every day, to all of us, while we're online. Facebook's News Feed attempts to deliver tailored content that 'most interests' individual users. Amazon's recommendation engine uses personal data tracking combined with other users' browsing habits to suggest relevant products. Google customises search results, and much more: for example, personalisation app Google Now seeks to "give you the information you need throughout your day, before you even ask". Such personalisation systems don't just aim to provide relevance to users; through targeted marketing strategies, they also generate profit for many free-to-use web services.

Perhaps the best-known critique of this process is the 'filter bubble' theory. Proposed by internet activist Eli Pariser, this theory suggests that personalisation can detrimentally affect web users' experiences. Instead of being exposed to universal, diverse content, users are algorithmically delivered material that matches their pre-existing, self-affirming viewpoints. The filter bubble therefore poses a problem for democratic engagement: by restricting access to challenging and diverse points of view, users are unable to participate in collective and informed debate.



Attempts to find evidence of the filter bubble have produced mixed results. Some studies have shown that personalisation can indeed lead to a myopic view of a topic; other studies have found that in different contexts, personalisation can actually help users discover common and diverse content. My research suggests that personalisation does not just affect how we see the world, but how we see ourselves. What's more, the influence of personalisation on our identities may stem not from filter bubbles of consumption, but from the fact that in some instances online personalisation is not very personal at all.


Data tracking and user pre-emption

To understand this, it is useful to consider how online personalisation is achieved. Although personalisation systems track our individual web movements, they are not designed to 'know' or identify us as individuals. Instead, these systems collate users' real-time movements and habits into mass data sets, and look for patterns and correlations across them. These patterns and correlations are then translated back into identity categories that we might recognise (such as age, gender, language and interests) and that we might fit into. By looking for mass patterns in order to deliver personally relevant content, personalisation is in fact based on a rather impersonal process.
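To make that mechanic concrete, here is a minimal Python sketch of segment-based personalisation. It is entirely hypothetical, not any platform's actual code: every name, category and threshold is invented for illustration. The point it demonstrates is the one above: individual browsing events are reduced to a coarse interest, users are bucketed into broad demographic segments, and content is keyed to the segment rather than to the person.

```python
# A hypothetical sketch of segment-based personalisation.
# It illustrates how individual browsing events can be collapsed
# into coarse, shared identity categories.

from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class User:
    user_id: str
    age: int
    gender: str
    page_views: List[str]  # topic tags collected by tracking

# Step 1: tracking reduces each user's raw activity to a dominant interest.
def dominant_interest(user: User) -> str:
    counts = Counter(user.page_views)
    return counts.most_common(1)[0][0] if counts else "general"

# Step 2: the user is mapped into a coarse bucket shared with many others;
# the system never needs to 'know' the individual, only the bucket.
def segment(user: User) -> Tuple[str, str, str]:
    low = (user.age // 5) * 5
    age_band = f"{low}-{low + 4}"
    return (user.gender, age_band, dominant_interest(user))

# Step 3: content is keyed by segment, not by person.
CONTENT_BY_SEGMENT = {
    ("female", "25-29", "fitness"): "weight-loss product ad",
}

def personalise(user: User) -> str:
    return CONTENT_BY_SEGMENT.get(segment(user), "generic ad")

alice = User("u1", 27, "female", ["fitness", "news", "fitness"])
print(segment(alice))      # ('female', '25-29', 'fitness')
print(personalise(alice))  # weight-loss product ad
```

Note that nothing in the sketch ever consults who 'alice' is: two users with the same gender, age band and dominant interest are indistinguishable to it, which is precisely why the resulting 'personalisation' is so impersonal.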

When the filter bubble theory first emerged in 2011, Pariser argued that one of the biggest problems with personalisation was that users did not know it was happening. Nowadays, despite objections to data tracking, many users are aware that they are being tracked in exchange for use of free services, and that this tracking is used for forms of personalisation. Far less clear, however, are the specifics of what is being personalised for us, how and when.


Finding the 'personal'

My research suggests that some users assume their experiences are being personalised to complex degrees. In an in-depth qualitative study of 36 web users, some female participants reported that, on seeing advertising for weight-loss products on Facebook, they assumed Facebook had profiled them as overweight or fitness-oriented. In fact, these weight-loss ads were delivered generically to women aged 24-30. However, because users can be unaware of the impersonal nature of some personalisation systems, such targeted ads can have a detrimental impact on how these users view themselves: to put it crudely, they must be overweight, because Facebook tells them they are.
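As a hypothetical illustration of how coarse such targeting can be, the snippet below encodes a campaign rule like the one in the study. The 24-30 age band comes from the study described above; the function itself is invented. What it makes plain is that no signal about weight, fitness or body image is ever checked.

```python
# Hypothetical targeting predicate: matches only coarse demographics.
# Nothing about weight, fitness or body image is ever consulted.
def matches_weight_loss_campaign(gender: str, age: int) -> bool:
    return gender == "female" and 24 <= age <= 30

# A 26-year-old woman is shown the ad purely because of her demographic
# bucket; any inference that she was 'profiled as overweight' is her own.
print(matches_weight_loss_campaign("female", 26))  # True
print(matches_weight_loss_campaign("male", 26))    # False
```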



It's not just targeted advertising that can have this impact: in an ethnographic, longitudinal study of a handful of 18- and 19-year-old Google Now users, I found that some participants assumed the app was capable of personalisation to an extraordinarily complex extent. Users reported that they believed Google Now showed them stocks information because Google knew their parents were stockholders; or that Google (wrongly) pre-empted a commute to work because participants had once lied about being over school age on their YouTube accounts. It goes without saying that this small-scale study does not represent the engagements of all Google Now users, but it does suggest that, for these individuals, Google Now's predictive promises seemed almost infallible.

In fact, critiques of user-centred design suggest that the reality of Google's inferences is much more impersonal: Google Now assumes that its 'ideal user' does – or at least should – have an interest in stocks, and that all users are workers who commute. Such critiques highlight that it is these assumptions which largely structure Google's personalisation framework (for example, through the app's adherence to predefined 'card' categories such as 'Sports', which during my study only allowed users to follow men's rather than women's UK football clubs). However, rather than questioning the app's assumptions, my study suggests that participants placed themselves outside the expected norm: they trusted Google to tell them what their personal experiences should look like.

Though these might seem like extreme examples of impersonal algorithmic inference and user assumption, the fact that we cannot be sure what is being personalised, when or how is a more common problem. To me, these user testimonies highlight that the tailoring of online content has implications beyond the fact that it might be detrimental for democracy. They suggest that unless we begin to understand that personalisation can at times operate via highly impersonal frameworks, we may be putting too much faith in it to tell us how we should behave, and who we should be, rather than vice versa.

This article was originally published on The Conversation.

