Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "That's an interesting perspective! The dialogue highlights the balance between w…" (`ytr_UgxtyUqCH…`)
- "I think it's worse than ugly. It's boring. Every AI image has the same sort of l…" (`ytc_Ugx8FNtFw…`)
- "No, you think too small. No one knows how to build an AI that is capable of mat…" (`rdc_jsxly36`)
- "Welcome America to AI -- the key is how humans can remain in the game. With mor…" (`ytc_UgyD_DcoJ…`)
- "At my former company they were all about AI and making sure the employees where …" (`ytc_UgyeyE6o3…`)
- "The very purpose of A.I. is to wipe-out the working class. And to surveil us int…" (`ytc_Ugy88WqNp…`)
- "Good thing Apple ai is not as intrusive as this pixel or Microsoft ai, so glad I…" (`ytc_UgyzgiLDG…`)
- "Its an AI enhancer, which means that it's smoothing them out, and it accidently …" (`ytc_UgzDlz-_s…`)
Comment
There's insufficient regulation in place now to protect humanity from the developments of AI because AI is already putting white collar workers out of jobs NOW! As AI capacity increases how do you think that's going to affect people in all sectors? Having an AI doctor that can diagnose cancer faster is going to be redundant if it puts human doctors and others out of work in the process there needs to be an extensive discussion on the scope of AI and serious humane decisions and rules set up to weigh up positive and negative outcomes. The open letter on AI from 2015 is more relevant now when considering that some of the things predicted by its signatories have already occurred in the past decade.
Source: youtube · Cross-Cultural · 2025-07-05T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyFeHy9G4sfz1iooxF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwjjgZycDlFTL-G7gN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyxN8ssJyMpakRuJUx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwDxueJK1r1xwevsx14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwKrnWVm5ymrw51Bdh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzSfrZOWvafB-9VtdB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw0kSFItRftBZ8bY2J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzuopg50LtiISXO8SN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgymbPMU0qQOORQ9bQB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwTQkIKO-5Fk3S7yKl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
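A raw response like the one above has to be parsed and validated before its rows can populate the coding table. The sketch below is a minimal, hypothetical example of that step; the allowed values per dimension are inferred only from the labels visible in the examples on this page, not from a definitive codebook, so treat the `DIMENSIONS` sets as assumptions.

```python
import json

# Allowed values per dimension, inferred from the examples above.
# This is an assumption, not the tool's actual codebook.
DIMENSIONS = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept if it is a dict with an "id" and every dimension
    holds one of the allowed values; anything else is silently dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(row)
    return valid

# Usage: parse one row matching the coding result shown above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coding_response(raw)[0]["policy"])  # regulate
```

Dropping malformed rows rather than raising keeps a batch usable when the model mislabels a single comment; the dropped IDs could be re-queued for another coding pass.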