AI Might Put People’s Job Security at Risk, but New Positions Are Being Created to Review AI Models and Their Inputs

The thought of people’s jobs being at risk in the advancing world of AI is a concern that has been debated for months. After all, job security is a big deal, and no one wants to be replaced by technology.

But we might be getting a little ahead of ourselves, because while the threat remains, a new wave of jobs is actually being rolled out. These roles focus on reviewing AI models and the various inputs they receive and outputs they generate.

Since November of last year, we have heard tech giants, prominent business leaders, and even academics express fear that their roles will soon become extinct as AI reigns supreme. After all, when technology can do a better job essentially for free, why would anyone employ a person and pay them a salary?

Remember, generative AI uses algorithms trained on vast amounts of data to produce strikingly lifelike output, whether that’s pictures, text, or responses to prompts. The quality of that training data is itself a point worth pondering.

The result is carefully and intricately crafted output comparable to what a qualified professional could produce. So the fear, for obvious reasons, was justified.

Analysts have predicted that close to 300 million jobs may soon be affected by AI, spanning office and administrative positions. Other fields under threat include support roles, engineering, law, and even architecture, along with finance, business, and the social sciences.

The inputs AI models receive and the outputs they produce still require human guidance and review. That need, in turn, is creating new careers as well as side positions.

Now, new roles centered on reviewing AI are emerging. The latest comes from Prolific, a firm that connects AI developers with research participants and pays people it hires to review AI-generated material.

The firm pays workers to go through AI-generated outputs and gauge whether their quality is up to the mark. Rates run to nearly $12 per hour, with minimum payments fixed at $8 per hour.

Moreover, human reviewers receive guidance from Prolific’s clients, which include big names such as Oxford, Google, and UCL. These clients guide reviewers along the way, briefing them on the kinds of inaccurate and harmful material they might come across.

As expected, reviewers are required to give consent before taking part in such research. One such worker told the media outlet CNBC that he has used Prolific on numerous occasions to deliver his verdict on the standard of work AI models were producing.

Speaking anonymously, he said there were multiple occasions where he had to step in because AI models went haywire and generated inaccuracies, which needed correcting to keep the replies from being unsavory.

He also spoke of occasions where models produced seriously problematic output, such as AI prompting users to buy and use drugs. Wow, how’s that for a reality check?

