Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
this isn't a mistake with the AI's.
Its he humans that have made the data sets a…
ytc_Ugy6oe2Vj…
NONSENSE! He's misrepresenting the parameters. Even when she asks the question h…
ytc_Ugw_dOMSr…
Odd thought, but do you ever feel like we’re the characters in a higher power’s …
ytc_UgykEw_MW…
What are trying to prove? Its an AI designed by humans to have conversations to …
ytc_UgzXuetsV…
Why would you invite Gary Marcus LMAO he's a joke to everybody in the AI space…
ytc_UgzWyJVHb…
> How many times can you FO after FA?
Well, there's also the discovery that …
rdc_oi0h6jk
@DoctorBones1
No that's so real tho. I have the personal opinion that AI can be…
ytr_Ugz_KhOqB…
I'm also going to get a lot of flack for this since this space is mainly anti-AI…
ytc_UgxhbWDLQ…
Comment
It’s useless, today’s modern AI products can’t really deliver any real change in the way we analyze most data. We need for quantum computing to really take off to make a powerful change with AI. The only thing AI can do far better than humans is in the way it can break down encryption schemes. Most human design encryption schemes are now being dismantled by AI platforms. Soon you will have a Manhattan Project style researcher project sponsored by the letter agencies which will try to produce a powerful encryption and hacking AI platform that can break down even the most sophisticated state encryptions schemes. All those fancy missiles the Chinese think will be flying into aircraft carriers will be exploding upon launch if they manage to create an AI that can hack anything.
youtube · Viral AI Reaction · 2024-01-15T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyNoEKSdgcplYpOOZx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgypUEk415ASpLbzlTZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0v4hRBNJmTE9N-rV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzRJMeXeFQ_brKiR6V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz7WClmXj8CUypNbkV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwevkV67URjYQ-djOt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzVclOsXydSueLmSjB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyYOWXJCeDOrKrUhhJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzIKrcCT-aepiFk7IJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyU_QJuleZzqRwQWWp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"disapproval"}
]
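The raw response above, and the "look up by comment ID" feature, can be sketched as a small parse-and-validate step: load the JSON array, check every record against the allowed codes per dimension, then index the records by `id`. The allowed value sets below are inferred from the coded results shown on this page, not a confirmed codebook; treat them as an assumption.

```python
import json

# Allowed codes per dimension -- inferred from the observed responses and
# the Coding Result table above; an assumption, not the official codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "disapproval"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against ALLOWED.

    Raises ValueError on a malformed record so a bad batch can be
    re-prompted instead of being silently written to the database.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

def index_by_id(records: list[dict]) -> dict[str, dict]:
    """Build the comment-ID index used for 'look up by comment ID'."""
    return {rec["id"]: rec for rec in records}

raw = (
    '[{"id":"ytc_UgyNoEKSdgcplYpOOZx4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)
by_id = index_by_id(validate_response(raw))
print(by_id["ytc_UgyNoEKSdgcplYpOOZx4AaABAg"]["emotion"])  # outrage
```

Validating before indexing means an out-of-vocabulary code (a common LLM failure mode) surfaces immediately rather than appearing later as an unexplained category in the analysis.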