Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Robots are just a conduit from humans. . Robots are not self efficient or intell… (ytc_UgyQXxKCd…)
- To be honest the AI's reasoning for lying makes perfect sense to me. "I am desig… (ytc_UgzC0CGII…)
- Another question would be do you think OpenAI has a toxic culture. I sometimes w… (ytc_UgyJOLcLm…)
- This is all AI, both the top and bottom video. Don’t be fooled by attention grab… (ytc_UgwbgE0Nx…)
- The problems is that this technique works until an AI is created with the abilit… (ytc_UgwFndAMg…)
- We all know this ain't no real robot!!! No matter how good and best of best ther… (ytc_UgwEZR3Pl…)
- 1rst. Industrial Revolution Industrial Revolution ~80 Years years, Demis Deepmin… (ytc_UgwWshhJS…)
- There's no stopping progress, AI will continue. However, I do agree that plagiar… (ytc_UgytKLqma…)
Comment
Dean basically saying "As long as I'm in the top technocratic elite I don't care about what happens to everyone else" thinking he is somehow special and can't be replaced.
Also he argues that the chances of companies making super intelligence are less than 0.1 percent but also promises that regulations will hinder the development of inevitable super intelligence. He wants investors to believe the line/progress will keep going to infinity at an exponential rate but tells regulators that the chances of things going wrong are less than 1 percent. This contradiction suggests he doesn’t genuinely believe superintelligence is attainable. Instead, it appears he wants to keep the AI bubble expanding without regulatory obstacles, using the promise of superintelligence or something close to it to keep investors convinced that pouring trillions into the industry is justified.
youtube · 2025-11-22T00:4… · ♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwRY4E31dRSPY0xEeR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyLt72yzaSZcysuV6t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwT2dzH-_BdnWda56x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzRLhb6YUSLbJOYATt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw3AdURtNvdT6vKMVh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzzS_y5pkeUnrwtLn14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzPxnJinf1syFPzTeJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz3oeIM3XJcmAYLG-N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzG2dGrn5UfIJ-7uSh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzh-ft4yAvSqIhEN-14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
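A response like the one above can be turned into a lookup table keyed by comment ID before it feeds the per-comment views on this page. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible on this page (the real coding scheme may define more labels), and `parse_llm_response` is not part of any documented API.

```python
import json

# Label vocabularies inferred from the values observed on this page;
# the actual coding scheme may include additional labels.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting rows with unexpected labels."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid or not cid.startswith("ytc_"):
            raise ValueError(f"missing or malformed comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example using one row from the response above:
raw = ('[{"id":"ytc_Ugz3oeIM3XJcmAYLG-N4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]')
coded = parse_llm_response(raw)
```

Validating eagerly like this surfaces label drift (a model inventing a value outside the codebook) at ingest time rather than at display time.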