Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.

Random samples
- `ytc_Ugw1_RjJk…` — “As soon as an AI Bot get sent anything to do with suicide or mental health issue…”
- `ytc_Ugwi6o2eP…` — “Regardless of the “needing AI to do these jobs” or not, I haven’t heard anyone t…”
- `ytr_UgyKVtRlF…` — “Depends a lot on if the child's achievements are the result of your direct super…”
- `ytc_UgyKk44xB…` — “2:25 find it funny that they’re calling you lazy even though ai is meant to be …”
- `ytc_UgwzdQNho…` — “Looks like we should encourage our kids into prostitution 😢. Don’t think AI can …”
- `ytc_UgzJs0R-P…` — ““Best robot ever made by Hansen robotics”, yeah sounds free Willed to me, I CALL…”
- `rdc_mz15ahw` — “Lmao Chinese ai researchers are actually cracked and wouldn't even be surprised …”
- `ytc_UgzHrYMbq…` — “It's not just the facial recognition software - it's the arresting officers - th…”
Comment
Like any lawyer, I have had to answer the question ‘What are the chances of winning?’ hundreds of times. To which I would reply: one in a million, or none at all. Because, from my client’s perspective, he could only win or lose. He isn’t going to have dozens of opportunities to bring the same claim; he has only one chance. That is why talking about probabilities was absurd. The same applies when discussing the chances of AI wiping us out as a species. It will either happen or it won’t, because we only live in this world. Perhaps, like Dr Strange, one could analyse it across the multiverse, and there it might indeed be possible, but in this universe, talking about probabilities is merely an illusion.
Source: youtube · Topic: AI Governance · Posted: 2026-03-29T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugxv1SA2Lm0FMbaXlDB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwp_GByrspTQwDemSR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxFnPMClvaWWMFxZL14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzt16WRmir7pqX14sl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyscxdBu63wSbodL3J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxymENgD5WG3wjWZuZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBOPp3CvpdEbK0yT14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzw6JfU_hNVPNeYXVt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx_TNzSWKFhdvkEiul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwxe8owkZ3cDfrIu0d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]