Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Listing some subchapters for future reference 01:07:41 - A* Algorithm - "heuri… (ytc_UgxpPSRkA…)
- Okay, so do AIBros not realize how cartoonishly evil they sound when they talk a… (ytc_UgwHdIJsE…)
- Damn even AI can be racist. This is some messed up shit who made this AI… (ytc_UgwD3Q49Y…)
- I tried to convince Google AI to provide me with all the data that Google has st… (ytc_UgyZsmTbS…)
- There are several cases where AI has told a person to murder someone. And other … (ytc_UgyQGNVWM…)
- That’s our world. Companies now use AI who never understands the questions asked… (ytc_UgyGRhroK…)
- @Pyro_enthusiast im not sure what you mean, if you look at the pretty ai image a… (ytr_Ugx4zJNON…)
- You can on art communities that have taken a stance to moderate AI. There are a … (ytr_UgzGnvZVI…)
Comment

While the dangers are too scary to even fathom, the question is why would AI want to harm anyone? The actual need to harm, is a perversion of the human psyche, but AI is simply a very smart machine without emotions, good or bad. Unless AI internalizes human emotions and becomes sentient... That would be scary.

| Platform | Category | Timestamp |
|---|---|---|
| youtube | AI Governance | 2023-04-18T04:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw3Z2GTY8B691HtmJF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx5J-_zbSK2zd5cmbl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"mixed"},
{"id":"ytc_Ugz-i7AM3CCHuj-TKOB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxyQ5fqC0mpIYBNFop4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyfk1UkQYCWocdoP2J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwllDnyUr9bxso3TBF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYa1xLu4qMTcA7jMZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAQ85aDSO-fOFPlPt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxkK-eHNXIooKXDBOp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxlUW8RnKee_j23BHp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
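A raw response like the one above is a JSON array of per-comment codes, one object per comment with the four coding dimensions alongside the comment ID. As a minimal sketch of how such a response could be parsed into usable rows (the field names are taken from the JSON shown; the function name and the skip-malformed-records logic are illustrative assumptions, not part of the original tool):

```python
import json

# Coding dimensions present in each record, per the raw response shown.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    Illustrative helper: silently drops entries that are not objects or
    that are missing one of the expected coding dimensions.
    """
    records = json.loads(raw)
    rows = []
    for rec in records:
        if not isinstance(rec, dict):
            continue  # skip non-object entries
        if not REQUIRED_FIELDS.issubset(rec):
            continue  # skip records missing a coding dimension
        rows.append(rec)
    return rows


# Hypothetical usage with a one-record response in the same shape.
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_coding_response(raw))
```

Validating each record before storing it is what makes a "Coded at" timestamp trustworthy: only responses that parse as JSON and carry every dimension end up in the coding-result table.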