Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples — click to inspect:

- "i tried using this on an ai, and yea. it works! it keeps the hands general shape…" (ytc_UgwyydveP…)
- "After making my own experience creating "AI", I get that publicly available data…" (ytc_UgzvEGbJf…)
- "I have seen now a couple of these doomsday scenarios- but they continually portr…" (ytc_Ugx1DqBlX…)
- "AI is soulless and destroys the environment. Either make it yourself or hire a p…" (ytr_UgxnEDO8H…)
- "Hey chatgpt, using the summary above, please create a long Reddit post that will…" (rdc_ktx542i)
- "First, you are not alone. Nothing to be ashamed of. Second, there are many peo…" (rdc_js41ml5)
- "The majority of the worlds problems are human inflicted. We can't help but bring…" (ytc_Ugy9ChcDw…)
- "It appears the self-driving Uber car was going fast when it hit that poor woman.…" (ytc_Ugzd9D1T2…)
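The sample IDs above carry a short prefix before the underscore (`ytc_`, `ytr_`, `rdc_`). A minimal sketch of routing an ID to its source by that prefix — note that the prefix meanings below are inferred from the samples (YouTube comment, YouTube reply, Reddit comment), not confirmed by the tool itself:

```python
# Hypothetical prefix-to-source mapping, inferred from the sample IDs.
# The actual tool may use different internal names for these sources.
PREFIX_SOURCES = {
    "ytc": "youtube_comment",
    "ytr": "youtube_reply",
    "rdc": "reddit_comment",
}

def source_of(comment_id: str) -> str:
    """Return the assumed source platform for a prefixed comment ID."""
    prefix, _, _ = comment_id.partition("_")
    return PREFIX_SOURCES.get(prefix, "unknown")

print(source_of("ytc_UgzHbyHr8BQmKOJI_8t4AaABAg"))  # youtube_comment
print(source_of("rdc_ktx542i"))                     # reddit_comment
```

The truncated IDs shown in the sample list (ending in `…`) would need their full form for an actual lookup.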
Comment

> No, those who forecast a specific timeline for AI achieving ASI or proclaim it will eradicate humanity are profoundly mistaken. If asked to clarify, I mean no disrespect, but perhaps consider aligning with more informed perspectives.

youtube · AI Governance · 2025-08-13T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzHbyHr8BQmKOJI_8t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyfv3_yck0fEbd-vIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzb1_gtmPpOHb6sXWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxlaLVtkoMqseLSwN94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx6T6j_PG_4hmoZ2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxmen0r82zywpa0aT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzhNbZtrih6h9sxn1Z4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwOx40P27mm7BJIWAt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxvatfpCv0Y9hZ4x1t4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxFudW6sfQhYS5ANwx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
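The model returns one JSON array per batch, with each element coding a single comment on the four dimensions shown in the table above. A minimal sketch of parsing such a response and looking up one comment's codes, using the first two rows of the response above (assuming the output is valid JSON, which may require validation in practice):

```python
import json

# Raw batch response, as returned by the model (first two rows shown).
raw_response = """[
  {"id": "ytc_UgzHbyHr8BQmKOJI_8t4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyfv3_yck0fEbd-vIl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]"""

# Index the batch by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment inspected above.
row = codes["ytc_UgzHbyHr8BQmKOJI_8t4AaABAg"]
print(row["emotion"])  # indifference
print(row["reasoning"])  # mixed
```

Keying the rows by `id` lets each coded comment be matched back to its source record regardless of the order the model returns them in.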