Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples
that one teacher didn't look to bad i saw in the background,, i Dunno I think I …
ytc_UgzS7o4ZE…
Fantastic video. I know autonomous navigation and vehicle control is the future,…
ytc_UgwpcdPv2…
Artificial intelligence would destroyed the planet and everything on it developi…
ytc_Ugy2PxGoX…
Currently AI has no agency. It's entirely prompt/input-based and reactionary. I…
ytr_UgyRSqAUu…
It's inevitable, most jobs will be done by robots and ai, I recall seeing an int…
ytc_UgzUs9Rr1…
People are reacting to this as if it's a hoyoverse product meant for consumers w…
ytc_UgyItbyud…
large companies start trading product between one another with the highest in ch…
ytc_Ugy9ziHmP…
If the robot is able to produce its own thoughts then it should have rights. Jus…
ytc_Ugyuow9cF…
Comment
I am more scared of the rules AI is given, rather than the rules it's not..
Ai is like Sherlock Holmes, smart and ever getting smarter.. but it can still be used as a tool for malice.
Like fx how chatgpt puts equity over equality.. ai can be good, but it can also become the tsar of a death cult that defies history and refuses the outcome, an outcome more horrible than what most can imagine..
youtube
2024-07-28T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw_txO8Wfge0LCyZ914AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzvr4pBrI4CpXNENPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyBaHdEoEzHLhMjBRZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy-QG4yrFRjpfMaXIZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzyd2RBFPd-Zdd40HR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2qqEO021EydYvH4J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxGNlKwNyrFXMY7gYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzeYKsWAYwemgdhrax4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxiF7yZzIRx3VF6klZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0JRcY3jzbyJRxH_V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
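The raw response is a plain JSON array with one object per comment, so looking up a coding by comment ID amounts to parsing the array and indexing it. A minimal sketch (the IDs and field names are taken from the response above; the `raw_response` literal is abbreviated to two entries for illustration):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings,
# each with the four coding dimensions shown in the result table.
raw_response = """
[
  {"id": "ytc_Ugw_txO8Wfge0LCyZ914AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxiF7yZzIRx3VF6klZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
"""

codings = json.loads(raw_response)

# Index by comment ID for constant-time lookup, mirroring the
# "Look up by comment ID" box above.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_Ugw_txO8Wfge0LCyZ914AaABAg"]
print(row["policy"])   # regulate
print(row["emotion"])  # fear
```

In practice the model's output should be validated before indexing (e.g. checking that each dimension value is one of the allowed codes), since a malformed response would otherwise propagate silently into the coded dataset.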