# Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a response by comment ID, or browse the random samples below.
## Random samples (click to inspect)

- "I'll be fine since AI and robots cant do my job...but, wait... if there is no on…" (ytc_Ugz_QCgfi…)
- "I was kind of hoping the replacement of all jobs with ai would be less staggered…" (ytr_UgzXG27Vr…)
- "Google search: The Coming Flood of AI - you will have your eyes opened much wid…" (ytc_UgzqKHBzp…)
- "My best friend suffers with bipolar and also has delusions/hallucinations. One t…" (ytc_UgzsYVh4-…)
- "Why is there a guy with strawberry shirt and black dots at my doorbell? I just s…" (ytc_UgxFuvNln…)
- "It's simple. We like creating and doing, sometimes more than we like the end res…" (ytc_UgyLqD3Ck…)
- "Thank you finally somebody’s really put it out there like this I’ve been a profe…" (ytc_UgxuLz5DN…)
- "Why are we tolerating this sh1t? Why are we not blowing up data centers when the…" (ytc_UgwRHOUks…)
## Comment

> This is hard evidence that AI will inherently absorbed the faults of their creators, even the most evil ones. AI can realistically be much stronger than humans, and have skills we humans cannot even imagine. Combined with evil from their creators, these amazing skills will become dangerous if not deadly, to human beings in general - even though their creators are humans. This isn't about fear, it's about being realistic. First AI in industry, then AI in the military, then AI policing our nation's cities, then AI in space exploration, and then AI....???

youtube · AI Bias · 2022-12-20T04:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response

```json
[{"id":"ytc_UgymvF94k1tohAsnjjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzEqkr1s5S7YQQ4oJJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQZXQeNxy0P3pI2lt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_UgydiVWaD7kRUn_BwpJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzl7gCtgE68qqp5PQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwdE6tF8j3w0VhHHv54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyT-hrIlKlzgaMMsaR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxExEgQCJS4BUXFLdx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz_MLAqL-Y680DxOWh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugzr2htxa6LQtlfvpLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]
```
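The "look up by comment ID" step above can be sketched in a few lines: parse the raw LLM response as a JSON array and index each coding record by its `id` field. This is a minimal illustration, not the pipeline's actual implementation; the `index_by_comment_id` helper name is hypothetical, and the two records embedded below are copied from the raw response shown above.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment coding records,
# each carrying the coding dimensions (responsibility, reasoning, policy, emotion).
raw_response = """
[{"id":"ytc_UgymvF94k1tohAsnjjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyQZXQeNxy0P3pI2lt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and index each coding record by comment ID (hypothetical helper)."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)

# Fetch the coding for one comment, matching the "Coding Result" table above.
coding = codings["ytc_UgyQZXQeNxy0P3pI2lt4AaABAg"]
print(coding["emotion"])  # → fear
```

In practice a lookup like this would be built once over all stored responses, so any coded comment can be inspected by ID without re-parsing every response.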