Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> what the people neglecting the dangers don't get in general is that AI doesn't have to have its own will, it's enough if it gets taught to emulate it. if no one can tell the difference, there is no difference. and we're already close to that with a relatively primitive system like gpt4.

youtube · AI Governance · 2023-07-09T16:1… · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJgzi4OkQ7QPapltJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugwu0fayEqNBHovgu2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJXnv95u_j7vvt3Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzKWrogoupRqwRe8EZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVZxBgODIUen5Phwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwaIsXG6vGzkg3o0V14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxF-JUIdpiLbjc_lUx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuzAUM67Dn8MAFwxZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzZBZa5vsqXpN2YZ2t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
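The raw response is a JSON array of rows, one per comment, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of how such a response could be indexed for ID lookup — the helper names (`parse_batch`, `DIMENSIONS`) are illustrative, not part of the tool:

```python
import json

# Two rows copied from the raw response above (illustrative subset).
RAW_RESPONSE = """[
  {"id":"ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJgzi4OkQ7QPapltJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"resignation"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict:
    """Index coded rows by comment ID; skip rows missing any dimension."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row[dim] for dim in DIMENSIONS}
        for row in rows
        if all(dim in row for dim in DIMENSIONS)
    }

coded = parse_batch(RAW_RESPONSE)
# Look up one comment's coding by its ID:
print(coded["ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg"]["policy"])  # regulate
```

Skipping malformed rows rather than raising keeps one bad model output from discarding the whole batch; a production version might instead log and re-queue those IDs.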