Critical Thinking, AI, and Personality Tests

Since the early 1950s, Artificial Intelligence (AI) has been theorized as part of the perpetual human quest to imitate Mother Nature. Researchers envisioned a system that thinks and acts like the brain, seeking to replicate its neural connections digitally.

AI has evolved through various stages, from Turing machines to deep learning and autonomous vehicles (https://robankhood.com/deep-learning/intelligence-artificielle/histoire-intelligence-artificielle), before spreading to the general public as the generative AI we know today.

It is this recent and rapid development that will push humans to sharpen their critical thinking.

How can we develop critical thinking in the face of a generative AI that will revolutionize tomorrow's information landscape?

We could discuss critical thinking within AI itself (the mathematical and algorithmic choices it makes to generate text, images, or music), the exercise of critical thinking with the help of AI and what it reflects back to us, or even the end of critical thinking through its potential atrophy. The focus of this article, however, is “critical thinking in the face of generative AI, particularly in personality tests.”

Developing critical thinking involves not only a technical understanding of what AI can and cannot do but also a reflection on the ethical, social, and personal implications of its increasing integration into our lives.

AI increasingly influences our daily decision-making, from product recommendations to medical choices to the way we perceive information in the media.

  1. AI and decision-making: It is crucial to understand the processes behind these automated decisions and to evaluate their reliability and potential biases.
  2. Ethical issues and AI: AI raises complex ethical questions, especially regarding privacy, security, and biases. Developing critical thinking will help navigate these ethical issues and promote responsible uses of AI.
  3. AI and the fight against fake news: AI is increasingly being used to create deepfakes and other sophisticated forms of disinformation, which makes critical thinking all the more necessary to detect them.

A simple definition of critical thinking could be:

“It is a skill: knowing how to evaluate and trust information in an informed and well-founded manner.”

In essence, it involves rigorously evaluating the quality of sources and content: knowing what makes a source reliable, which content is plausible in light of the best available knowledge, which arguments are solid and relevant, and which evidence is backed by rigorous methods.

Most of our knowledge is not produced by ourselves; it is selected from sources we trust or do not trust, within our relational bubble.

Generative AI is developed by humans and modeled on human functioning.

It generates text, images, sound, and video based on its knowledge and on the statistical and mathematical associations it finds relevant within its own relational bubble.
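To make this concrete, here is a minimal sketch of that statistical principle, using a made-up toy corpus and a simple bigram model (real generative models rely on neural networks trained on billions of tokens, not on raw bigram counts):

```python
import random
from collections import defaultdict

# Toy corpus (entirely made up): the model can only ever reproduce
# associations that appear in its training data -- its "relational bubble".
corpus = "the model predicts the next word the model saw most often".split()

# Count which words follow which (a bigram model).
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation word by word from the observed associations."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # no learned association: the edge of the bubble
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the model saw most often"
```

The output is driven by frequency, not by truth: the model ranks continuations by how often it has seen them, which is exactly why the critical evaluation of its answers remains our job.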

To develop critical thinking, we can apply the following method:

  1. Verify sources: Always investigate the origin of the information. Find out who the author is and what their credibility is.
  2. Cross-check information: Check if multiple reliable sources report the same information. This helps identify fake news.
  3. Analyze the content: Be aware of inconsistencies, factual errors, and possible biases in the content.
  4. Be skeptical: Maintain a skeptical mindset, especially in the face of information that evokes strong emotional reactions.
  5. Continuous education: Stay informed about the latest techniques and trends in misinformation and digital media.

In the face of multi-channel information overload, how do we keep a cool head?

Even with this method, some media actors publish erroneous information and let themselves be fooled by algorithms.

Carl Bergstrom and Jevin West from the University of Washington recently published “Calling Bullshit: The Art of Skepticism in a Data-Driven World” (https://www.penguinrandomhouse.com/books/563882/calling-bullshit-by-carl-t-bergstrom-and-jevin-d-west), in which they argue that, amid the abundance of misinformation, disinformation, and fake news, it is becoming increasingly difficult to know what is true.

Our media environment has become hyperpartisan.

Science is driven by press releases.

Startup culture elevates bullshit to high art.

We are well equipped to spot old-fashioned bullshit based on sophisticated rhetoric and vague words, but most of us do not feel qualified to question the avalanche of news presented in the language of mathematics, science, or statistics.

Questions to ask:

  • Are the numbers or results too good or too dramatic to be true?
  • Does the claim compare comparable items?
  • Does it confirm your personal biases?
  • Should we seek the truth or the least bad information available?

What attitude should we adopt?

Artificial intelligence is revolutionizing decision-making in many fields, offering data analysis and processing capabilities far beyond human abilities.

In medicine, for example, AI can analyze medical images with high accuracy, helping doctors diagnose diseases more quickly and accurately.

In the financial sector, AI algorithms are used to analyze market trends and advise on investments. These advances offer unprecedented opportunities to improve the efficiency and quality of the decisions made.

But the reliability and accuracy of AI systems depend heavily on the quality of the data they are trained on and their design.

Just as humans have cognitive biases, AI systems have algorithmic biases.
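As a toy illustration (the data and the “model” below are entirely fabricated and deliberately simplistic), here is how a system trained on historically skewed decisions faithfully reproduces that skew:

```python
# Fictitious historical hiring decisions: (group, hired?).
# Group A was favored in the past; group B was not.
training_data = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 30 + [("B", False)] * 70
)

def majority_prediction(group: str) -> bool:
    """Predict the most frequent past outcome observed for this group."""
    outcomes = [hired for g, hired in training_data if g == group]
    return sum(outcomes) > len(outcomes) / 2

print(majority_prediction("A"))  # True  -- the model learned to favor group A
print(majority_prediction("B"))  # False -- the model learned to reject group B
```

No discriminatory rule was written anywhere: the bias comes entirely from the training data, which is why the quality of that data matters as much as the design of the system itself.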

AI in Psychometric Data Analysis

We are about to launch an AI-enhanced version of our test reports, particularly for T-Persona.

These reports will never replace human support, but they will serve as a basis for more precise and detailed analysis.

To provide reliable and rigorous data, we have trained our AI on several research sources, which allows us to generate trustworthy reports.

These augmented reports will not be used to replace professionals (recruitment, orientation, and coaching) but rather to assist them in their efforts and save them time by providing conclusions they can use wisely.

We will therefore support you in maintaining critical thinking in your areas of expertise.

Conclusion

The increasing dependence on AI can lead to a lack of human vigilance, or “automation complacency,” where humans may become too confident in AI recommendations and neglect their own judgment and expertise.

It is therefore essential to find a balance between using AI to improve decision-making and maintaining an appropriate level of human involvement, especially in situations where intuition, ethics, and human judgment are irreplaceable.

This implies continuous training for AI users, so they understand its limitations and features and how it complements human judgment rather than replacing it, as well as teaching critical thinking from an early age across all school subjects (https://journals.openedition.org/ctd/8256).

Now that bullshit has evolved, we must relearn the art of skepticism.

As my friend Albert E. said:

“The important thing is not to stop questioning. Curiosity has its own reason for existing.”