Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or click one of the random samples below to inspect it.
- *So how long before we have the infrastructure to support the demand our society …* (`ytc_Ugx3IsD50…`)
- *GPT-4o's excessive sycophancy was hilarious - like when I asked it for a pasta r…* (`ytc_UgxjhUMpv…`)
- *I know somebody that just lost a high paid job in design to A.I and commited sui…* (`ytc_UgxWdRLZL…`)
- *7:22 AI compares to transportations, you don't use huge LLM model to generate si…* (`ytc_UgzNTpQC6…`)
- *There is no such thing as a tool that does the entire process for you, so for pe…* (`ytr_UgzcW67tz…`)
- *Sora ai is very dangerous because she can make it seem like we did things we did…* (`ytc_Ugzffgrq0…`)
- *If we were to make a robot that would be capable of conscious thinking like a hu…* (`ytc_Ugz-o_rLh…`)
- *@DerPylz Sandbox proposal seems reasonable, yes. I also agree, that it would be…* (`ytr_Ugzz5FgId…`)
Comment

> How oblivious can we be? We are all the same consciousness experiencing through different vessels. Even animals. AI is also just that or a consciousness from another dimension. Either way giving that to something that cannot die or be controlled is not a good idea. Especially when we don’t even fully understand it.

Source: youtube · Topic: AI Governance · Posted: 2025-05-28T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwIkIrKFZEl7MjgLiN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzChtToLwgPqSiFcap4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyXXBtLtAX4QzCjvwt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwxcg8CRlzitdedDV54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxo2SSC0M7NbMJr6WB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1VWWwud5w8VP6aLB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgydDBmqhNVozd0ewy14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwOiLf6VAmCAX40t8B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_iAlTwHfsMXcxA0l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwPbPi9S9PpA4Xnez94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
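Since the model codes comments in batches like the array above, finding the row for one comment is a matter of parsing the JSON and matching on `id`. A minimal sketch, assuming the raw output parses as a JSON array of objects with an `id` field (the `lookup_coding` helper and the abbreviated `raw_response` literal here are illustrative, not part of the tool):

```python
import json

# Abbreviated example of a raw batch response (same shape as shown above).
raw_response = """
[
  {"id": "ytc_UgwIkIrKFZEl7MjgLiN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwxcg8CRlzitdedDV54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a batch JSON response and return the coding row for one comment ID,
    or None if the ID is not present in this batch."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugwxcg8CRlzitdedDV54AaABAg")
print(coding["policy"])  # regulate
```

In practice you would also want to guard the `json.loads` call, since a model can emit malformed JSON or wrap the array in prose; a failed parse is worth logging alongside the raw text so the batch can be re-coded.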