Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- To fellow AI researchers and students. I am sure you are reacting the very same … (ytc_Ugz5at2qW…)
- My argument against Ai is "FELLAS WITH DIGITAL ART YOU USE YOUR HAND TO DRAW BUT… (ytc_UgzidylCB…)
- I just got my phd in chemistry and have been applying to jobs that require phds … (ytc_UgwIawApy…)
- Yes. Somehow I suspect the least upset person here may be the one whose name is … (rdc_loqby2p)
- People should stop talking to robotics people about this topic. Building a model… (ytc_UgzCh89Od…)
- I think we can agree, an AGI should be capable of novel thought. But an LLM just… (ytc_UgzgsddE9…)
- You should take a look at the Google self driving cars, they are making exactly … (ytc_UgyXty2IB…)
- Looking at how much AI has developed since last year, these night shades are doi… (ytr_UgwiBINzK…)
Comment

> The simple fact is, future problems with A.I. are unknowable. Nothing like it has ever existed before. Therefore, any legislation meant to constrain it will be inadequate. True artificial general intelligence will be more intelligent than every person who has ever existed, combined into a single entity... and every superpower is currently racing to get their own version of AGI up and running as soon as possible.

youtube · AI Governance · 2023-04-18T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx7akKrpJziuRgu9l94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxHP1mOKVoG_43qfPJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx-SfUZKcylMy7hhX54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwnReqtl6hksicZLVl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"mixed"},
{"id":"ytc_Ugyn6i4MSEcOHlciy414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNGoz9pLE-jCM3JL14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzQUPOodNBZTN9DLdp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyeFOGj_3SK1qRo3SV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzLPWjZJCrVnG97MEV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwNaFVudmM08oPX6BJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
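The raw response is a JSON array with one coding object per comment ID, covering the four dimensions shown in the table above. As a minimal sketch, such a response can be parsed and sanity-checked before use; note the allowed label sets below are an assumption inferred only from the values visible in this sample output, not a documented schema.

```python
import json

# Allowed values per coding dimension. These sets are an assumption,
# reconstructed from the labels visible in this sample response only.
ALLOWED = {
    "responsibility": {"none", "government", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"fear", "outrage", "mixed", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that carry an id
    and whose values all fall inside the allowed label sets."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical two-row response: the second row uses an out-of-schema
# label ("alien") and is therefore dropped by the validator.
raw = (
    '[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"mixed",'
    '"policy":"ban","emotion":"mixed"}]'
)
print(len(parse_codings(raw)))  # → 1
```

Validating against a closed label set like this catches the most common LLM coding failure, an invented category value, before it reaches downstream analysis.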