Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Yeah, ai is never going to come up with inverse square root because it wasn’t co…" (ytc_Ugyqbk0Ux…)
- "With the recent dlc drop with SEAL— RSI is inevitable. The timeline for AGI and …" (ytc_UgxL5zzp4…)
- "What are gov going to do when robots become popular have WiFi and can be control…" (ytc_UgwSDuWrx…)
- "People who think that AI shouldn’t make Art are more stupid then I thought. You…" (ytc_UgztfQU5p…)
- "Also in addition to this, I think this guest is a jaded employee of Open AI that…" (ytr_Ugxtns9pq…)
- "Africa could not have power plants because the copper wires kept getting stolen …" (ytc_Ugw6FF2tv…)
- "These interactions can seriously affect people whether it's about AI or not can …" (ytr_Ugz0XdX1w…)
- "AI CEO raises alarm over the safety of AI systems. His solution? Throw more mone…" (ytc_UgyVWND0R…)
Comment
He also said you shouldn't just let anyone build AI meaning everybody else. I think someone like Elon respects the danger of it enough where if I'm going to pick somebody I would rather pick that he builds AI because of the respect for its capacity to be so dangerous he would likely put in more safeguards than other people or companies that are not aware enough or care enough to be as cautious
youtube · AI Governance · 2024-03-21T00:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxJLuV-ZcpbNyL1Lrp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzLZUWKGCVZbJFfaZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxkbGIRNWGRIpeWoM94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzAhGY3xh-8HvppYod4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwEj5a0Ay6brmC7D8R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzabMyR4851I8f9V0B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxiJRzygUow3j_qHXp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzeRGqZ2tS47VR1lA54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyn26gSTEaHJvjMZP94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxrQqfZkC2wehg1nzt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"}
]
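The Coding Result table above corresponds to the last entry of this batch response. As a minimal sketch of how such a response could be parsed and looked up by comment ID (the file name raw_llm_response.json and the helper lookup_coding are hypothetical illustrations, not part of the tool):

```python
import json

# Hypothetical path to a file holding one raw LLM batch response,
# i.e. a JSON array of {"id": ..., "responsibility": ..., ...} objects.
RESPONSE_PATH = "raw_llm_response.json"

# The four coding dimensions each entry is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def lookup_coding(comment_id: str, path: str = RESPONSE_PATH) -> dict:
    """Return the coded dimensions for one comment ID from a batch response."""
    with open(path, encoding="utf-8") as f:
        batch = json.load(f)
    by_id = {entry["id"]: entry for entry in batch}
    entry = by_id[comment_id]  # raises KeyError if the model skipped this comment
    # Fall back to "unclear" if a dimension is missing from the entry.
    return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}


# Example: the entry that produced the Coding Result table above.
print(lookup_coding("ytc_UgxrQqfZkC2wehg1nzt4AaABAg"))
# {'responsibility': 'developer', 'reasoning': 'virtue', 'policy': 'liability', 'emotion': 'approval'}
```

The lookup keys off the full comment ID, so the truncated IDs shown in the sample list would first need to be resolved to their complete form.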