Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwkKAwwS… — "I mean, I bet there some people that could read it, memorize the same sections a…"
- ytc_Ugw8IPKUT… — "I suppose we could have automated driving which is as bad as a human driving to …"
- ytc_UgwxyjJgB… — "Give this some thought. A wealthy person knows what's gonna happen with regards…"
- ytc_Ugz7xJYir… — "Just like the rise of e-commerce in late 1990s we have winners and losers and ev…"
- ytc_UgyIKXMhr… — "Taught for almost 25 years in North America and across Asia. Too many students c…"
- ytc_UgyepJxiW… — "If you want to vent to an AI then host and run it locally on your computer using…"
- ytc_UgyYngjFn… — "Man sometimes i feel bad for chatgpt bit sometimes i get pissed off at it…"
- rdc_dv0slrs — "> Why would you work as hard as.... Not necessarily. I don't actually work a…"
Comment

> I've known Roman (a bit) since 2010 (Lugano, Switzerland, AGI conference), back when he believed that perhaps he/we could come up with a solution/strategy for AI safety through relatively traditional research in a way that could apply to superintelligence. He discusses a lot of topics here and is very much worth listening to.

youtube · AI Governance · 2025-10-04T23:4… · ♥ 48
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwVv86zBUBPj2N__HN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaYV-52c9FtlXHESV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy8KYA4e0d3F4Xh8014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxnT7yjiBYdtLmXF5V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwuNH22lQYCIWDiDIx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzfN-Aj2OOgzSjKZLd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzJTG49LFD9C0vtA9h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzc0PTN6ChVsp6FFnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw24im24lmOEMxSMsB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxhhZXti42riS0PZLx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
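A raw response like the one above can be looked up by comment ID with a few lines of Python. This is a minimal sketch, assuming the response is a JSON array of objects with exactly the fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name and validation policy are illustrative, not part of the tool.

```python
import json

# Two records copied from the raw response above, for illustration.
raw_response = """
[
  {"id": "ytc_UgwVv86zBUBPj2N__HN4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy8KYA4e0d3F4Xh8014AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]
"""

# The five fields every coded record carries in the response above.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def lookup_coding(raw, comment_id):
    """Parse a batch response and return the coding dict for one comment ID.

    Returns None if the ID is absent. Records missing any expected
    field are skipped rather than failing the whole batch (a hypothetical
    policy, not necessarily what the tool does).
    """
    by_id = {}
    for rec in json.loads(raw):
        if EXPECTED_FIELDS.issubset(rec):
            by_id[rec["id"]] = rec
    return by_id.get(comment_id)


coding = lookup_coding(raw_response, "ytc_Ugy8KYA4e0d3F4Xh8014AaABAg")
print(coding["emotion"])  # -> indifference
```

Keeping the raw string alongside the parsed dict is what makes the "inspect the exact model output" view possible: the parsed values drive the Coding Result table, while the unmodified text is what gets displayed here.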