- Workplace networking site LinkedIn is harvesting data from Australian profiles
- Users need to ‘opt out’ of the system if they do not want their posts pillaged
Social networking site LinkedIn has been accused of harvesting unwitting Australian users’ data to train its artificial intelligence ‘models’ without letting them know.
Creative media expert Dr James Birt blasted the tech giant for making users opt out of a policy they did not even know existed – after automatically setting their accounts to agree to have their profiles’ data pillaged.
‘This showcases the dark side of big tech,’ the Bond University associate professor told news.com.au.
‘While users can opt out, the setting is enabled by default, which raises concerns about informed consent.
‘The absence of a proactive opt-in is a typical example of how big tech leverages user apathy or a lack of awareness to further its AI initiatives.’
The setting, known as ‘Data for Generative AI Improvement’, has been automatically switched on for users outside the EU, EEA, UK or Switzerland, giving permission for LinkedIn and unnamed ‘affiliates’ to ‘use your personal data and content you create… for training content creation AI models’.
Leaving the setting on allows the app to share users’ activity with those unnamed ‘affiliates’ for the purpose of ‘training content creation AI models’.
This includes anything a user posts, as well as the contents of their profile.
Turning off the setting will stop LinkedIn from harvesting data going forward but will not erase what it has already taken while the setting was active.
LinkedIn is not the only mainstream app to ‘scrape’ users’ data for its own benefit.
Meta, the company behind Facebook and Instagram, confirmed earlier this month it had also stored data on Australian adult users’ photos and posts since 2007.
Meta made the admission when its privacy policy director, Melinda Claybaugh, was pressed by senators while appearing before an inquiry.
Regulations surrounding this kind of behaviour have tightened in Europe, where companies need permission to store users’ data, but in other jurisdictions such as Australia this courtesy is not required.
Dr Birt said the decision to automatically opt users in for this kind of practice exemplified the ethical concerns around personal data storage.
LinkedIn states that it uses generative AI for ‘a variety of purposes’ including in its writing assistant which helps users draft messages.
Microsoft owns the platform, and the AI models in question are LinkedIn’s own rather than those of Microsoft’s Azure OpenAI service, which offers the technology behind ChatGPT.
LinkedIn spokesperson Greg Snapper clarified the app was ‘not sending data back to OpenAI for them to train their models’.
Daily Mail Australia has contacted LinkedIn for comment.
To turn off the feature in the app, users have to tap their profile, go to settings, then data privacy, and finally access ‘Data for Generative AI Improvement’.
From there the setting can be turned off.
When users click ‘learn more’ at the final stage, the app explains its AI usage.
‘This setting applies to training and finetuning generative AI models that are used to generate content (e.g. suggested posts or messages) and does not apply to LinkedIn’s or its affiliates’ development of AI models used for other purposes, such as models used to personalise your LinkedIn experience or models used for security, trust, or anti-abuse purposes,’ it states.
In LinkedIn’s generative AI FAQs, the app claims it will ‘seek to minimise personal data in the data sets used to train the models’.