Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If Elon and all the other signatories will cease their AI training during the mo…" (ytc_UgxYwY5t9…)
- "How do we give AI inherent empathy? Do we teach it ethics? I really want to know…" (ytc_Ugwd-e-ye…)
- "MATH GRAD+ GUARD HERE, ALL YOU ARE SAYING IS CRAP SIR. NO, THE ONLY ONES SCARED …" (ytc_Ugx8ryTJP…)
- "I've taught at the college level for over 25 years. As an experiment, I gave Cl…" (ytc_UgwI4T0ra…)
- "No, there is no need to embrace it. Just because it's good at stealing things an…" (ytr_UgxBHFNDL…)
- "Saw joe at target, he showed me some chatgpt that said i owed him 20$ so he righ…" (ytc_Ugw5DvlVz…)
- "Half of these jobs are the definition of capable, which doesn't mean good. AI is…" (ytc_UgydSv4HZ…)
- "22:08 'I've used AI before, not a ton, I was curious'. Same, a similar confessio…" (ytc_UgwhHvnn2…)
Comment
Geoffrey Hinton, often called the "Godfather of AI," made headlines when he publicly warned about the dangers of artificial intelligence. A winner of both the Turing Award and the 2024 Nobel Prize in Physics, and one of the key pioneers behind modern deep learning, Hinton dramatically shifted his stance in 2023 after decades of advancing the field. He stepped down from his role at Google so he could speak freely about his growing concerns.
Hinton has said that AI may be "the most dangerous invention ever." His worry centers on the rapid development of AI systems that are becoming increasingly powerful—so much so that we may not fully understand or control them. He fears that future AI could surpass human intelligence, gain agency, and act in unpredictable ways. In his words, these systems might "develop goals that conflict with human values" or even manipulate us without our knowledge.
He is especially concerned about AI’s use in misinformation, autonomous weapons, and surveillance. While Hinton still believes in the potential for AI to benefit society, he now urges much stronger global oversight. His call is not just about regulation, but about ensuring we do not blindly push forward a technology that could one day outsmart and overpower its creators.
Subscribe for more educational content and unlock knowledge every day with FactTechz
Source: youtube · Topic: AI Governance · 2025-07-19T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzE6bnGLTk22eVEJWB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyzrZzztxg_aba3Cv14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx445Vk0V7nCPOWFXR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyRP0cjzf19ybv5mJJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzo4fGNQz5sw9ef1Jl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx_Gi6LHlMeiAQqJBh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyWQpado3v0eNrr3bJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwbrb4T130NHtFOA5Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyb83rzcTh_2NPcy2t4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzWNA1xzXHrps8N5v14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
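The raw response is a JSON array of per-comment codings, and the page supports looking up a coding by comment ID. A minimal sketch of how such a batch response can be parsed, validated, and indexed by ID is below. The dimension vocabularies are only inferred from the values visible in the sample above; the real codebook may include other categories.

```python
import json

# Two rows copied from the sample batch response above (truncated for brevity).
raw = """[
  {"id":"ytc_UgzE6bnGLTk22eVEJWB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx445Vk0V7nCPOWFXR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Allowed values per dimension, inferred from the sample output only —
# an assumption, not the project's actual codebook.
DIMENSIONS = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def index_codings(payload: str) -> dict:
    """Parse a batch response and index valid rows by comment ID.

    Rows with an unrecognized value in any dimension are dropped, so a
    malformed model output never silently enters the coded dataset.
    """
    by_id = {}
    for row in json.loads(payload):
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            by_id[row["id"]] = row
    return by_id

codings = index_codings(raw)
print(codings["ytc_Ugx445Vk0V7nCPOWFXR4AaABAg"]["policy"])  # regulate
```

Indexing by ID makes the "look up by comment ID" view a single dictionary access; the validation step mirrors the practical need to guard against an LLM coder drifting outside the allowed label set.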