Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Idk why people think anything with autodrive is a good idea . we have people run…
ytc_Ugxz7U_4W…
My manager is being replaced by ai but Im a custom cake decorator lol..
I'll die…
ytc_UgwcoLuCH…
Yes, this happened with you, but when I ask this question from my chatgpt, there was a so…
ytc_UgzP1WCaF…
AI will take most of the jobs in the future. the fact, u dont wanna deny.…
ytc_UgxkYjEoJ…
Now I am going to be up at night thinking of weather it’s a robot or not 💀💀💀…
ytc_UgwSU0DjN…
Will the job losses last as long as the 09 crisis though? The 09 crisis wiped ou…
rdc_gkpvqpk
So AI is the tool that Satan uses to make everyone on earth worship the wild bea…
ytr_UgxTBKzTC…
This is what happens when you play god! Greed is the evil force and mankind does…
ytc_UgwDJxzl_…
Comment
Okay one word... two words, object permanence. In any scenario ai evaluates itself on a scale of potential good to cost efficiency on utilitarian scale in relation to time. If the ai understands that it holds object permanence then it shuts itself off. This is in relation to AGI something that can self actualize its own thoughts and ai. But for the fun of it lets say object permanence isn't viable. Well with AGI were kinda Miffed but with ai it comes down to potential of causal effects. Since individual humans are the driving force of its thoughts we are like brain cells humanity is its entire brain and ai is just one wickedly efficient network that connects us so each piece is just as potentially valuable as the set if its beneficial goals are to be met. Although losing that network slows the brain down it doesn't lose a potentially vital piece that may spark the cure for cancer.
The saving grace with AGI is the hope that the utility efficiency is transcendent of self. If it can group think and internally think then it might understand utility as a goal that transcends itself if need be, in the case that it has to choose self-destruction it could easily see that humans can just remake an exact clone of the system or even a higher functional version.
youtube
AI Governance
2025-12-07T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwRQRRk-lR_Rp1wdkd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyK8HJFkui_Vjow5HB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmUaTmeTgBLj9tCTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNy3GPMOlgPu6DTap4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwA9CgQlTCac0Kgdzh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxDBz48ud5hlBSz8fp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-wKFyw2plmnV3haJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKzZh2l7s8SeLFxbN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxnNHrurqXmXZHsX394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNNqMkvPiOaUqeLSd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
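A response like the one above is only usable if every row parses and every dimension takes a value the codebook defines. The sketch below is a minimal validator, assuming the allowed values are exactly those visible in this page's table and sample output (the real codebook may define more); `ALLOWED` and `validate_codings` are hypothetical names, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample output on
# this page (an assumption; the actual codebook may include other values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological", "mixed"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage", "resignation"},
}

def validate_codings(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors.

    An empty list means every row had an id and only allowed values.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    errors = []
    for i, row in enumerate(rows):
        if "id" not in row:
            errors.append(f"row {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                errors.append(f"row {i} ({row.get('id', '?')}): bad {dim}={value!r}")
    return errors
```

Running the validator before the codings are written to storage turns a silently miscoded comment (e.g. a hallucinated emotion label) into a retryable error.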