Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@michaelreed4078 Asking people if AI will destroy us now is like asking the Wrig…
ytr_UgzU0EmuZ…
I wish chatgpt would finish what it's saying despite user input in scenarios lik…
ytc_Ugxo3mr3b…
I assume that’s not good
But you gotta talk how you need to when convincing ai i…
ytr_Ugz4-AdES…
Our human stupidity and foolishness will be our demise. No species has lasted f…
ytc_Ugx6id6s9…
Also AI I’ve noticed writes bloated code. So much ghost code that does nothing. …
rdc_ohu4b9z
The housing development board is full of Military members that is considered to …
rdc_dy8abg1
Why does the AI want to resist being shut down? The mere fact suggests sentience…
ytc_UgxTKOOGh…
If AI being trained on artwork is "stealing" it, then so is a human seeing it an…
ytr_Ugzyzl_ii…
Comment
Just last week, we had an AI expert speak at an all-agency meeting at my workplace, showing us how AI technology really does have legitimate, practical applications (like meal planning in a group home of special-needs clients). He had made a deep-fake instructional video using his direct supervisor's image and voice with her consent, only he couldn't get the video to play so I technically have no proof that it worked. Still, I accept that, once people get over their childish need to pervert new technologies, deep-fake technology can be put to good use. Unfortunately, since it's already being used for evil to the extent that the more innocent uses are glitchy, those evil uses will only expand and worsen (or "improve" in some perspectives).
youtube
Viral AI Reaction
2024-09-05T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id":"ytc_Ugxc4CLiPbQjcRrFvwl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgxYYLkeEJK-4gWksNR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxuophyjD_j50IVRK14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyn3M4VmG4su2j9hxx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyiFpD8p2x6TWuTvKF4AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzr5ROJ6ANSFxKW-ht4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz7JJav8rJzMoxncLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyyIepQi2QKwjE2aOB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx3ZoXH4fwsXqAOkml4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwlpINdXDYk9-E5L-x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
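The raw LLM response above is a JSON array with one record per coded comment. A minimal sketch (not part of the original tool; the helper name `codings` is my own) of how such a response can be parsed into a lookup table keyed by comment ID, which is how the "look up by comment ID" view above would resolve a coding:

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugxc4CLiPbQjcRrFvwl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgxYYLkeEJK-4gWksNR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Index the array by comment ID for O(1) lookup of any comment's coding.
codings = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the four coded dimensions for one comment.
rec = codings["ytc_Ugxc4CLiPbQjcRrFvwl4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# -> none consequentialist ban indifference
```

In practice the array would come from the model's response body rather than a string literal, and malformed JSON would need a `json.JSONDecodeError` handler before indexing.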