Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugxw85wqP…: "Do you know how much energy or gallons of water it cost to say please or thank y…"
- ytc_Ugw3M3X1J…: "Sure, basically make AI do inbreeding experiments on humans i.o.t. create almost…"
- rdc_clut9i0: "Little fact, Koreans are a little racist... ...so are the Japanese... *Edit on…"
- ytr_UgyIQNyu9…: "@TheInstituteOfArtAndIdeas like I have said. We could all have good lives. Supp…"
- ytc_UgyBKQ8-m…: "As a mother of two kids under 3 years old, technology use in school really scare…"
- rdc_n7okmgv: "Yes but those advancements were based on perceived improvements in transistor de…"
- ytc_UgyxDeyyG…: "This case serves as a useful cautionary tale. I've tried asking ChatGPT for stat…"
- ytc_Ugzuh7Aag…: "Sophia sounds like she's vegan, and ready for a vegan world. The other robot was…"
Comment
Everything we hear about AI is its bugs.
But wait until scammers with sophisticated social engineering skills start applying themselves to manipulating AI agents into doing things they are not supposed to do. AI is going to introduce an enormous new attack surface that no one in the industry seems to even be talking about.
In one case an AI app that translated police bodycam footage into a written report ended up documenting how an officer was transformed into a frog. Apparently there was a Disney film in the background and the AI got confused. But wait until scammers learn how to use techniques like this to deliberately trick AI.
All these tools are being rushed out now because there has been huge investment and they need some return now, even though the technology is clearly nowhere near ready to replace humans.
youtube · AI Jobs · 2026-02-05T15:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
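A coding result like the table above can be checked programmatically before it is stored. A minimal validation sketch in Python, assuming only the dimension values observed in the sample codings on this page (the full codebook may define more):

```python
# Allowed values per dimension, as observed in the sample codings on this
# page; assumed incomplete relative to the full codebook.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "approval", "mixed",
                "resignation", "outrage"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above.
result = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(result))  # []
```

The `validate` helper is a hypothetical name for illustration; any value outside the observed sets is reported rather than silently accepted.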
Raw LLM Response

```json
[
{"id":"ytc_UgwJAGJMGbfIvM-4xoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKgBEgmNjQ9XZta4l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDbgMux-XYZ8gJTdZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTRhcPzdp5hdodyNB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXeK7Tyq6IVMzhI6t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6_JlFlgX1OoXacvN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0vGZQ1LaMz7TZNOF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy_m8S-PXGeLTm4vhZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxvV71TVVQDxReztaJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzZuQ2ZfvoA0NraHIB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
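The lookup-by-comment-ID step can be sketched by indexing a raw response like the one above into a dictionary keyed by `id`. A minimal example, using two codings taken from this page's raw response (truncated to two entries for brevity):

```python
import json

# Two codings copied from the raw LLM response above; each object carries
# the comment ID plus the four coded dimensions.
raw_response = """
[
  {"id": "ytc_Ugy_m8S-PXGeLTm4vhZ4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwJAGJMGbfIvM-4xoZ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugy_m8S-PXGeLTm4vhZ4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

The looked-up values match the Coding Result table for the comment inspected above; variable names here are illustrative, not part of the tool's actual code.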