Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The guy that doesnt need to look for a job doesnt see a problem with AI destroyi…
ytc_UgxlAg_Lq…
My AI I was talking to - suggested that I should download free PDFs about rare o…
ytc_Ugyy63d50…
So many AI tools out there now, it's overwhelming. AICarma could help brands fig…
ytc_UgwILcNCS…
Why did he come to the US? Bc his parents have money and don’t want him to be a …
ytc_Ugww5Sl_I…
Andy Jassy is a POS. The first thing Amazon should replace is him. An AI would d…
ytc_UgyrZZEHi…
Imagine an AI agent, with access to the internet, with an understanding of robot…
ytc_UgxkJ4u9r…
@PKorp-PriyankaKalyanby not involving in social media means she never posts her…
ytr_Ugzb5DBZm…
@Bob-the-1-and-only-blob-fish Yeah, recreating an image for you where someone c…
ytr_UgwC-vcMv…
Comment
A bit late watching this, but really i think if it is possible for ai superintelligence to form from using human data, we're probably cooked at some point.
It's not enough to simply find ways to make it not want to get rid of us. We would need to find ways to make it want to actively protect us. With so many governments and people making ai, all it takes is one person to make a malevolent ai. We need the other superintelligent Ai to also guard against that.
Maybe if we can get it to look at us like dog owners look at their pets... ain't nobody hurting my dog if I can help it.
youtube
AI Governance
2025-07-29T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugx3IsD50rxgNmaMGbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYrnT5SMdeZQdiO4Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwt9XEw2wk-3NWQjil4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmSQdxyHmtVuBMtb94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHMU3yRZdAqDlrpah4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
{"id":"ytc_Ugzq0swf4aQsKWdDz4h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzE9LTN8mp4wz98zb54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugx8EAkO17krzfjDyYt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwegzZ5J77BqrSBWet4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzOY6lzIVpAUATM86R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
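The raw response is a JSON array of per-comment codes, one object per comment ID with the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of parsing it and looking up one comment's codes, reusing two rows from the response above for illustration, might look like:

```python
import json

# Two rows copied from the raw LLM response above, as an illustrative sample.
raw = '''[
{"id":"ytc_Ugx3IsD50rxgNmaMGbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHMU3yRZdAqDlrpah4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"}
]'''

# Index the coded rows by comment ID for lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coded dimensions for one comment ID.
dims = codes["ytc_UgwHMU3yRZdAqDlrpah4AaABAg"]
print(dims["policy"])  # regulate
```

A real pipeline would load the response text from the model output rather than a literal string, and should tolerate the occasional malformed closer (as in the response above, where the array ended with `)` instead of `]`).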