Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "the a.i. revolution is a revolution on humanity" 'obviously 'and perhaps the a… (ytc_UgytPgoc5…)
- We need to write an AI kill switch algorithm - to prevent AI from taking over.… (ytc_UgwNmcILT…)
- Software development is provably a hard problem, and can thus not be automated. … (ytc_UgzuaW6lX…)
- AI will have no problem implementing the code once we hit AGI. And we WILL hit A… (ytc_UgzTKYxyO…)
- What AI? I believe AI means Anonymous Indians. They need humans to still check t… (ytc_UgwIhR_qo…)
- No AI is pretty stupid. It can't even write a novel without, having dozens of re… (ytc_Ugz2AcnBM…)
- You won't need to attack the artificial intelligence, only the people who dictat… (ytc_UgxOvlAcj…)
- Here’s another thing I thought of that will happen with AI. It already can happe… (ytc_UgwsSj4Ek…)
Comment

> AI will be far more and super dangerous once the quantum computer is fully developed. Because with gazillions of decision points retrieved at the speed of light, anything is possible.

youtube · AI Governance · 2024-06-09T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
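Each coded comment reduces to one record over the four dimensions in the table above. Here is a minimal sketch of that record as a Python dataclass; the allowed label sets are taken only from the values visible on this page, so the full vocabularies (and any labels not shown here) are an assumption:

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets observed on this page; fuller vocabularies are an assumption.
RESPONSIBILITY = {"ai_itself", "developer", "user", "distributed"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "disapproval", "outrage", "mixed", "resignation", "approval"}

@dataclass
class CodedComment:
    """One coding result, matching the table above and the raw response below."""
    id: str            # comment ID, e.g. "ytc_Ugw0XC_-THHo2COfBzB4AaABAg"
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any label outside the observed vocabularies.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion!r}")
```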
Raw LLM Response
```json
[
  {"id":"ytc_Ugw0XC_-THHo2COfBzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyD945JDoSOXFBT3Ap4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgxF4xIib9ns8ISnjMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgydVZZ2cFPbEQwNdzF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzgSnPrHfKqOy30P6R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyuai-Os-AzA2i8igl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzxyAhiKAHNJ7rqBBN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxfC7-W3zBm8lZcyfJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzP8KczjGBgVLVs-KN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw0Wc11viitE1pXhwV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"}
]
```
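The raw response is a JSON array with one record per comment in the batch, which is what makes the lookup-by-ID feature at the top of this page possible. A minimal sketch of that lookup, assuming each stored response is a JSON array like the one above; the file name `raw_responses.json` is hypothetical:

```python
import json

def find_coding(comment_id: str, path: str = "raw_responses.json") -> dict | None:
    """Return the coded record for comment_id, or None if the model skipped it."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # the batch response is a plain JSON array
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Example: pull the record for the quantum-computer comment shown above.
coding = find_coding("ytc_Ugw0XC_-THHo2COfBzB4AaABAg")
if coding is not None:
    print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

Returning `None` rather than raising keeps the caller's hands free: a model occasionally drops a record from a batch, and the inspection page can then flag the comment as uncoded instead of crashing.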