Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It is very GOOD that AI and robots take all the jobs. People need to have fun, a…
ytc_UgyuCTej5…
AI in itself can be useful. Its the people controlling it that can be a risk…
ytc_UgzE6vxGa…
why would Google give aa shit about the rights of ai? they barely care about hum…
ytc_UgzIaYTFk…
And everyone who uses it to learn, learns errors without knowing it. It's the sa…
ytr_UgzqMcw2w…
AI has access to all internet and every book ever written but yet it can't spell…
ytc_UgxT2OoRJ…
Claude says he doesn't want to participate in actively intervening, yet he does …
ytc_Ugxf8t999…
Can ai take photos of ghosts ? Or are spirits able to use ai ?? Think about the…
ytc_UgzjEsVzz…
Ahhh.... this is kinda sad...
I always been told "Omg! You were born with talent…
ytc_UgyT3ZdvM…
Comment
I don’t think we’d be able to run and hide from these programs. In fact, when it does become smarter than us, why would it let us know beforehand?
The most frightening thing is that there’s nothing we (the average person) can do about it. It’s already happening and even if AI isn’t smarter than us, it’s already a program with a huge blast radius — especially if it gets in the wrong hands by a bad actor.
youtube
AI Governance
2023-07-07T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwOf7FSslArYHYoTUR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRbBGHrmCxmNzHuxF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyvTUHooI1FGqwS_bd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxQf1gGAYwcoWcP2jl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwBUriXOFH_Y6ld5OZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugzh0XTBNW6-D23G7NB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyjpeLtCCJgd0GeS9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzK6WEVqcDC7mbJVVx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy3fFM1L9G-kRgusIV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy82SeXIRoaaJjifNd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
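The raw response is a JSON array, one object per comment, with the comment ID and the four coding dimensions (responsibility, reasoning, policy, emotion) shown in the table above. A minimal sketch of how a lookup by comment ID could work over such a response (the variable names and the truncated-to-two-rows payload here are illustrative, not the tool's actual implementation):

```python
import json

# Illustrative raw LLM response: a JSON array of coded comments,
# using the same dimension keys as the coding-result table above.
raw_response = """
[
  {"id": "ytc_UgwOf7FSslArYHYoTUR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwRbBGHrmCxmNzHuxF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's codes by its ID.
code = codes_by_id["ytc_UgwRbBGHrmCxmNzHuxF4AaABAg"]
print(code["emotion"])   # -> fear
print(code["policy"])    # -> ban
```

Indexing into a dict keyed on `id` is what makes "look up by comment ID" cheap even when a batch response contains many coded comments.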