Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "@That_StringBean_Sorcerer funny I’m actually an artist who’s been drawing since …" (`ytr_UgwgV4U-R…`)
- "Couple this with the advance of automation, which you also made a video on, and …" (`ytc_UgwchtNer…`)
- "Crazy to me how youtube put hella AI ads into this video, I hope AI artists lear…" (`ytc_UgxTLFQ8V…`)
- "@Karim-ik5ij It's evident that you're not a radiologist because your understandi…" (`ytr_UgzB5P7qy…`)
- "The way you picked up on those nuances between AI and human styles was insightfu…" (`ytc_UgzuhZHuu…`)
- "Haha, interesting comparison! Sophia here is all about embodying wisdom rather t…" (`ytr_UgzcXeNdn…`)
- "What I love as well is how AI bros tend to use the quote “a good artist copies; …" (`ytc_UgyKRXOOT…`)
- ""AI Artists" are just customers who've commissioned art from an AI instead of a …" (`ytc_UgyEyGPbc…`)
Comment
> Firstly ai is an existential risk and beneficence we should focus on that and reduce the other. Secondly there are a number of control systems I can imagine to help mitigate its threats. 3rd Intelligence should only be embodied to the level for it to solve a task. Ai doesn't need to build robots we already did this and anything that is electrical may be able to be influenced with a sufficiently powerful ai. There are already angels and shoggoths in the current system. We can make more angels by very clean and kind data. :) One more thing ;)
Source: youtube | Topic: AI Governance | Posted: 2023-07-09T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
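Each coded row assigns one value per dimension. A minimal sketch of validating a row before storing it, using only the category values that appear in this sample output (the project's actual codebook may define more categories, so the `CODEBOOK` sets here are an assumption):

```python
# Allowed values per dimension, inferred from the sample codings on this page.
# ASSUMPTION: the real codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def validate_coding(row: dict) -> list[str]:
    """Return a list of problems with one coded row (empty list means valid)."""
    problems = []
    if "id" not in row:
        problems.append("missing comment id")
    for dim, allowed in CODEBOOK.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

row = {"id": "ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg", "responsibility": "ai_itself",
       "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
print(validate_coding(row))  # []
```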
Raw LLM Response
```json
[
  {"id":"ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJgzi4OkQ7QPapltJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugwu0fayEqNBHovgu2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJXnv95u_j7vvt3Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzKWrogoupRqwRe8EZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVZxBgODIUen5Phwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwaIsXG6vGzkg3o0V14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxF-JUIdpiLbjc_lUx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuzAUM67Dn8MAFwxZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzZBZa5vsqXpN2YZ2t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```