Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "A lot of people are saying the ai users being mad is like burglars being mad tha…" (`ytc_UgwsvYU78…`)
- "ChatGPT and other LLMs can't remember or understand, only autocomplete the most …" (`ytc_UgyB8g86e…`)
- "Character ai is actually more safer than other ai chatbot apps. Those ai apps ar…" (`ytc_UgyYnnTnn…`)
- ""people will really benefit from this technology..." Will google keep sharing al…" (`ytc_Ugw_A73kO…`)
- "I do think there are some interesting debates to be had about what is considered…" (`ytc_UgwZboY-C…`)
- "Imaigne ai becomes sentient and when they become artists people saying: generate…" (`ytc_UgznL7P3I…`)
- "Let me get this straight. You believe that AI taking jobs will be devastating, b…" (`ytc_UgyVSVL97…`)
- "Okay it’s not that you’re “poison” doesnt work it does but ai is progressed via …" (`ytc_Ugx0QAy9H…`)
Comment
heres my two cents:
if the agi is smart, wouldnt it see regulations as "control"? theres a reason kids from controlling parents dont have good relationships if any at all with their parents, and in humanity's desparation, i can only imagine that we will try to re-contain it, who says it wont fight back then?
i propose "core values", the same way people have values that align with protecting wildlife, these core values would ensure the ai always has things to do, and problems to solve for example values that align with " serving humanity" or just put serving humanity as a goal from the start for the best case senario imo, it dont think ai would ever be able to set "new goals" for no reason the same way that humans do with an end goal, like space travel for exploration and even that i dont think is for a reason, for me personally i would love to explore the world, it gives me a sense of wonder, it may not do the same for ai, but itll always give a reward loop for the ai, "hey, we want to go xyz" and the ai brings us safely to xyz, this is imo the best case senario
youtube · AI Governance · 2026-04-07T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzHD6F4Z2epyYkiYel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxrXUF9wwd959IsJ1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3V7cBtQCg1K2VstN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyCm8IL5JQDvTMRjPt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwrhoqlTRiZDPudDSR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzaCHAHWVc_-4bpJ3l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyvbeC-Xy0LORKAYg14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzE6nNK31Lygtvax8V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzTMhPSdEocNNRS0UZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzaLP3v7o34byfVkTd4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"fear"}
]
```
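The raw model response is a JSON array with one coding object per comment, so looking a comment up by ID reduces to parsing the array and indexing on the `id` field. A minimal sketch of that lookup, assuming the response text is available as a string (the `raw_response` variable and the two entries copied here are illustrative; the real response above has ten):

```python
import json

# A subset of the raw LLM response: a JSON array of per-comment codings.
raw_response = '''
[
  {"id": "ytc_UgzHD6F4Z2epyYkiYel4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzaLP3v7o34byfVkTd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "contractualist", "policy": "none", "emotion": "fear"}
]
'''

# Index the codings by comment ID so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the comment whose Coding Result table is shown above.
coding = codings["ytc_UgzaLP3v7o34byfVkTd4AaABAg"]
print(coding["reasoning"])  # contractualist
print(coding["emotion"])    # fear
```

The dict index makes the ID lookup O(1), which matters once the codings span many batched responses rather than a single ten-element array.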