Raw LLM Responses
Inspect the exact model output for any coded comment.
Random Samples
- "This discussion has been so eye opening to what and who but not how to fight ag…" (ytc_UgwzXJpSX…)
- "Even Anthropic, in their research, give it a 15% chance some frontier models are…" (ytc_UgwOKilcy…)
- "AI have no reason to take over. They'll break down, they'll fail, they'll have m…" (ytc_UgzSjYSVk…)
- "When will you not need yourself to create these videos (means Total ai) or alrea…" (ytc_UgwELex2O…)
- "wanna know how i know AI will never have a chance with grand adoption. Because B…" (ytc_UgwVce2zu…)
- "It's not transformative. It's just storing it in the form of a neural network gr…" (ytc_UgwgY2BzJ…)
- "Might? Teacher here. It’s already happening. AI is open on laptops 24/7 and stud…" (rdc_magy26j)
- "''If you're an artist for asking an AI to generate a image then everyone who use…" (ytc_UgzykFt2g…)
Comment
> The possible challenges with Super AI in the future are beyond greed and other human desires.
> At least one of the current simple forms of AI already figured to replicate itself as a failsafe, so imagine what Super AI could do.
> Those working on it will let it get controll over more than just data, so imagine the day it can build things, controll it’s own power sources and disable any failsafe mechanism as soon as it detects human threats.
> In the end it will be curiosity that eradicated the human race, greed is only one of it’s sponsors.
> Another scenario could be AI destroying other AI including itself.
Source: youtube · AI Governance · 2025-12-17T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[{"id":"ytc_Ugy0E95gn2pxhou0VMJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw3Y2X8BGz9LcsL4Bl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugyo8nFYrDr0dmdmush4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw4b5W4cP83efZA8Ld4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz8kcfLPxcJ2Y6_vZF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugzuv_otTZGbHoXrns54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy94K9dsJs_Ulseobh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwzyBmTqKOWk4J0DVN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgwXvy-0cmyatSFQ1V14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx1_PCfdXuxECaeztp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}]
```
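Looking up a coded comment by its ID amounts to parsing the model's JSON array and indexing the records. A minimal sketch, assuming the raw response is a JSON array of objects shaped exactly as above (the `index_by_id` helper name is ours, not part of the tool):

```python
import json

# Excerpt of the raw LLM response shown above: one coding record per comment.
raw_response = """
[{"id":"ytc_Ugzuv_otTZGbHoXrns54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy94K9dsJs_Ulseobh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index the coded dimensions by comment ID."""
    records = json.loads(response_text)
    # Drop the "id" key from each record; use it as the lookup key instead.
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"} for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_Ugzuv_otTZGbHoXrns54AaABAg"]["policy"])  # → regulate
```

The first record here is the same comment as the Coding Result table above (distributed / consequentialist / regulate / fear), so the lookup reproduces that table row.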