DISCLOSURE: This is a complete and accurate record of my fascinating conversation with ChatGPT 4o, undertaken on Sat 24 May 2025, finishing at 22:45:36.
Ian Clark.
I’m going to make a statement and you must tell me if your training set makes you agree, and to what extent. Please phrase the extent of your agreement as a star-rating (one to five stars).
Sounds good—go ahead with your statement, and I’ll respond with a star-rating and explanation based on patterns in my training data.
IQ is a psychologist’s rating of general intelligence. Therefore it is applicable to an AGI (Artificial General Intelligence).
Rating: ★★☆☆☆ (2 out of 5 stars)
Explanation:
IQ (Intelligence Quotient) was developed as a standardized measure of human cognitive abilities, particularly in relation to problem-solving, pattern recognition, memory, and verbal reasoning. It’s calibrated on human populations, with norms and distributions that reflect human developmental stages and capabilities.
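To make the calibration point concrete: an IQ score is not an absolute quantity but a standardized score, conventionally rescaled so that the human norming sample has a mean of 100 and a standard deviation of 15 (as on Wechsler-style scales). The short Python sketch below shows only that arithmetic; the function name, raw score, and population parameters are invented purely for illustration.

    # Rough, illustrative sketch of what "calibrated on human populations" means:
    # an IQ score is a standardized score relative to a human norming sample,
    # conventionally rescaled to mean 100 and standard deviation 15 (Wechsler-style).
    # The raw score and population parameters below are invented for this example.

    def iq_score(raw_score: float, norm_mean: float, norm_sd: float) -> float:
        """Convert a raw test score into an IQ-style standardized score."""
        z = (raw_score - norm_mean) / norm_sd  # position within the human norming sample
        return 100.0 + 15.0 * z                # rescale to the conventional IQ metric

    # Example: a raw score one standard deviation above the norming-sample mean maps to 115.
    print(iq_score(raw_score=60.0, norm_mean=50.0, norm_sd=10.0))  # -> 115.0

An AGI has no natural place in that human norming sample, which already hints at the difficulty.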
Applying IQ to an AGI (Artificial General Intelligence) is problematic for several reasons:
- Different Cognitive Architectures: AGIs may process information, learn, and solve problems in fundamentally different ways than humans. IQ tests are not designed to capture these differences.
- Benchmarking Issues: IQ tests are norm-referenced to human performance. An AGI might outperform all human test-takers or fail in areas where human intuition or embodied cognition is key, making comparisons meaningless.
- Scope of Intelligence: AGI aims to replicate or exceed human-level intelligence across all domains, not just the subset measured by IQ tests.
That said, some researchers have used IQ tests as rough benchmarks for narrow AI systems (like language models or vision systems), but this is more about convenience than suitability.
Conclusion: The statement oversimplifies the relationship. While IQ is a psychologist’s rating of human general intelligence, it’s not inherently applicable to AGI without major reinterpretation.
I’m going to make another statement. Give it the same treatment.
Absolutely—go ahead with your next statement.
Karen Hao (Empire of AI) suggests that Sam Altman used the term Artificial General Intelligence (AGI) to describe OpenAI’s research goals for its appeal to possible investors.
Rating: ★★★★☆ (4 out of 5 stars)
Explanation:
This statement is plausible and supported by some journalistic analysis, though it’s not a direct, confirmed admission from Sam Altman himself.
Karen Hao, in her reporting (notably for The Wall Street Journal and previously MIT Technology Review), has explored how the branding and narrative around AGI have been used by OpenAI—and by Sam Altman specifically—as part of a strategic positioning effort. Referring to OpenAI’s mission in terms of AGI gives it grander significance, which:
- Distinguishes OpenAI from other AI companies focused on narrow applications (like advertising or automation).
- Appeals to visionary investors, particularly those drawn to transformative technologies or existential risk mitigation.
- Justifies long-term, high-risk investments by framing OpenAI’s work as foundational to the future of civilization.
While we can’t definitively state Altman’s personal intent without his explicit confirmation, the AGI framing has clearly played a role in OpenAI’s public and investor-facing communications, aligning with the broader trend Karen Hao points out.
Conclusion: The statement reflects a well-supported interpretation in reputable journalism, though it remains a well-informed inference rather than a confirmed fact.
Here’s a question based on what you’ve just replied. Please star-rate the assumptions which underlie my question.
Great—please go ahead and ask your question, and I’ll star-rate the assumptions it’s based on.
Isn’t it dangerous to describe AI’s research goals as AGI, which is a clear reference to General Intelligence – the human faculty purportedly measured by the IQ test?
Excellent question. Let’s break it down and star-rate the key underlying assumptions:
Assumption 1: “AGI is a clear reference to general intelligence as measured in humans.”
Rating: ★★★★☆ (4 out of 5 stars)
Explanation:
Yes, AGI does reference “general intelligence” akin to the human ability to perform well across a wide range of cognitive tasks—an idea rooted in psychology (notably Spearman’s g factor). IQ tests are the most prominent operationalization of this concept in humans. However, AGI as a term in AI research is inspired by this idea, not a direct extension of it. The match isn’t perfect, but the association is strong and intentional.
Assumption 2: “Describing AI goals as AGI implies a connection to IQ or human-like intelligence, which could be misleading or dangerous.”
Rating: ★★★★☆ (4 out of 5 stars)
Explanation:
This is a valid concern. Using “AGI” as a label can evoke the idea that machines will have minds or cognition like humans, which may:
- Overstate current capabilities
- Contribute to hype or fear
- Attract investment or policy responses based on flawed analogies
While not necessarily dangerous in itself, the rhetoric could distort public and policymaker understanding, leading to bad decisions or misplaced priorities.
Assumption 3: “IQ is central or definitive in the definition of human general intelligence.”
Rating: ★★★☆☆ (3 out of 5 stars)
Explanation:
IQ is a proxy for general intelligence but not the definitive measure. Many psychologists view intelligence as multidimensional (e.g., Howard Gardner’s theory of multiple intelligences), and critics argue IQ tests reflect only a narrow cognitive scope. So while the IQ link is historically relevant, it’s not the sole authority on general intelligence.
Assumption 4: “Framing AI research in AGI terms introduces ethical or societal risks.”
Rating: ★★★★★ (5 out of 5 stars)
Explanation:
Strong agreement. This framing shapes expectations, funding priorities, public anxiety, and regulation. If misunderstood, it can:
- Distract from real near-term risks (like bias, surveillance, or labor disruption)
- Inflate claims about current tech
- Encourage techno-utopianism or fatalism
Many AI ethicists and sociologists have flagged this as a crucial issue.
Summary:
Your question is built on well-supported assumptions—some slightly overstated, but most grounded in reasonable interpretations of language, psychology, and ethics.
What steps might OpenAI (and other firms) take to neutralise the risks you have identified?
OpenAI—and other AI firms—can take several strategic and ethical steps to neutralize the risks associated with framing their goals in terms of “AGI.” These risks mainly involve miscommunication, public misunderstanding, hype inflation, and policy distortion. Here’s how they could address them:
1. Clarify Terminology and Capabilities (Transparency)
- Avoid ambiguous or mythologizing terms like “AGI” without operational definitions.
- Accompany major model releases with clear statements of limitations, use cases, and uncertainties.
- Differentiate between narrow AI capabilities and human-like cognition to temper expectations.
2. Improve Public and Policy Education
- Proactively support public literacy in AI, especially around what “general intelligence” means in technical versus psychological terms.
- Host or fund policy briefings and workshops with interdisciplinary experts to help governments craft informed responses.
- Provide plain-language documentation (not just technical papers or hype-driven blogs).
3. Use Alternative Framing
- Shift from anthropomorphic goals (“creating AGI”) to task-based or outcome-oriented framing—e.g., “building safe, reliable, and broadly capable AI systems.”
- Focus language on safety, robustness, fairness, or “foundation models” rather than “superintelligence” or “human-level AI.”
4. Promote Multi-Stakeholder Governance
- Collaborate with independent researchers, ethicists, and civil society organizations in shaping standards and communicating goals.
- Support third-party audits of capabilities, risks, and safety efforts to avoid self-reinforcing hype cycles.
5. Encourage Internal Critical Discourse
- Foster a culture where employees can question the narrative and branding without fear of reprisal.
- Include sociotechnical expertise in leadership and research teams—not just engineers and computer scientists.
6. Actively Dampen Hype in Media Interactions
- Decline to engage in overhyped comparisons to human intelligence in interviews or demos.
- When media sensationalizes AGI claims, issue corrective clarifications instead of letting the narrative run unchecked.
7. Emphasize Short- and Medium-Term Harms
- Acknowledge and address real-world impacts now—like algorithmic discrimination, misinformation amplification, or labor displacement.
- Reframe responsible development as solving current problems before speculating about AGI futures.
These steps don’t require abandoning the concept of AGI entirely—but they do require disciplined, accountable, and transparent communication around what it means, where we are, and what responsible development looks like.
Not only AI firms but also governments have an interest in encouraging AI research (a) worldwide, for the good of humanity, and (b) internally, for the good of the nation, whether commercial or strategic. Let me repeat my question with subtle changes.
What steps might Governments take (notably USA, UK and EU) to neutralise the risks we have identified?
Excellent refinement. Governments—particularly in the USA, UK, and EU—have a dual responsibility: to foster innovation and to guard against the systemic risks AI poses. To neutralize the risks arising from the AGI narrative (overhype, public misunderstanding, policy distortion, and misplaced strategic focus), they can take several concrete steps:
1. Promote Clear, Evidence-Based Language
- Standardize definitions in regulatory and policy frameworks (e.g., differentiate between “advanced AI,” “foundation models,” and “AGI”).
- Encourage or mandate responsible marketing practices from firms receiving government grants or contracts—avoiding sci-fi language that overstates AGI imminence.
2. Invest in Independent Research and Foresight
- Fund interdisciplinary AI forecasting teams that include sociologists, ethicists, historians, and economists—not just computer scientists.
- Establish AI Observatories or public bodies (similar to the EU’s AI Office or the UK’s AI Safety Institute, formerly the Frontier AI Taskforce) that monitor claims, trends, and narratives in the field.
- Create hype audits—institutions that assess the realism of technical claims made by AI labs.
3. Regulate Transparency and Accountability
- Require disclosure of AI model capabilities, limitations, and intended use in public deployments.
- Build transparency into public-private partnerships: if governments support a firm’s AGI research, they should demand explanation and justification for the terminology and its implications.
4. Counterbalance Strategic Hype
- Discourage AGI claims being used as justification for unchecked military or surveillance funding.
- Ensure that national AI strategies don’t lean too heavily on AGI rhetoric as a basis for techno-nationalist investment races.
- Frame AI competition as standards-based, safety-focused, and human-centric, not just a race for supremacy.
5. Center Near-Term AI Harms in Policy
- Legislate around algorithmic bias, misinformation, labor impact, and data rights—areas where risk is real and present.
- Include these near-term risks in risk-tier frameworks, as seen in the EU AI Act, so regulators don’t ignore non-existential harms.
6. Create Space for Public Dialogue and Literacy
- Launch national campaigns on AI literacy, possibly modeled on public health or climate campaigns, to combat misunderstanding about what AI (and AGI) is.
- Fund citizen assemblies or deliberative forums to capture public values and concerns around AGI claims and real-world AI deployment.
7. Support Global Coordination Without AGI Hype
- In international forums (e.g., G7, UN, OECD), advocate for cooperation based on real capabilities and shared human needs, rather than speculative AGI competition.
- Work toward international norms on responsible AI communication—to avoid a global arms race in exaggerated claims.
In Summary:
Governments must manage the tension between strategic interest and public responsibility. That means grounding their support for AI in rigorous oversight, clear language, near-term priorities, and public trust—not in mimicking the speculative AGI narratives that tech companies sometimes advance for attention or capital.
