Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by its comment ID or by picking one of the random samples below.

Random samples:
- @endlesslearning26 this time it will be different... within the next 50yrs AI wi… (ytr_Ugws_QYmH…)
- Elon there, being the living embodiment of Jeff Goldblum in Jurassic Park. "your… (ytc_UgwZkNV65…)
- i still feel morve comfortable driving next to Waymo than some of these road-ran… (ytc_UgzoMfYXR…)
- Unpopular opinion, but I really don’t prefer the AI vids that they do. Go back t… (ytc_UgwA5F4HF…)
- When AI is used as a tool to assist art, like giving you a basic outline you can… (ytc_UgyX_feKn…)
- I say as a developer, we are the first being laid off and automated out of work.… (ytr_Ugwj4XnyF…)
- This guy is obviously smart. However, it's insane to me that he has identified t… (ytc_UgxrImGkc…)
- For now, try some alternative models, Vicuna or OpenAssistant, ... What you are… (rdc_jg75s2w)
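
The sample IDs share a small set of prefixes (ytc_, ytr_, rdc_), which appear to encode the comment's source. Below is a minimal sketch of resolving that prefix; the mapping (YouTube comments, YouTube replies, Reddit comments) is inferred from the samples above, and all names are illustrative rather than part of the tool.

```python
# Source prefixes inferred from the sample IDs above; the real pipeline
# may use additional prefixes. The mapping is an assumption.
PREFIX_TO_SOURCE = {
    "ytc_": "youtube comment",
    "ytr_": "youtube reply",
    "rdc_": "reddit comment",
}

def classify_id(comment_id: str) -> str:
    """Guess a comment's source platform from its ID prefix."""
    for prefix, source in PREFIX_TO_SOURCE.items():
        if comment_id.startswith(prefix):
            return source
    return "unknown"
```

For example, `classify_id("rdc_jg75s2w")` would return `"reddit comment"` under this assumed convention.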
Comment
> After listening to several experts talk about the dangers of AI, I still don't have a full picture of how a "civilization ending" scenario would look like. They simply won't detail that claim. I mean, creating a super articulate and convincing system which manipulates you into jumping off a roof or tells you fake news 24h or makes you leave school, your partner or your family is sure bad and it should be put in check, but civilization ending? I would honestly let the language model progress, maybe add more disclaimers warning people about the risks, and simply regulate how and where these super smart brains can be installed (no automated weapon systems, no Black Mirror murderous moving robots, etc.). Wouldn't that be enough? Or am I possibly too naive?
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2023-04-18T07:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
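
Each coded dimension is drawn from a closed set of labels. The sets below are reconstructed only from the values visible in this section, so the project's actual codebook may be larger; this is a sketch of how a record could be validated against such a codebook, not the pipeline's own check.

```python
# Allowed values per dimension, reconstructed from the labels visible in
# this section. Assumption: the real codebook may define more categories.
CODEBOOK = {
    "responsibility": {"none", "company", "user", "ai_itself", "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def validate_record(rec: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    if "id" not in rec:
        problems.append("missing id")
    for dim, allowed in CODEBOOK.items():
        value = rec.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems
```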
Raw LLM Response
[
{"id":"ytc_Ugxd7W921BfAiqqn_X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTfFQZ5y42fCy5y8R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzk6oWxOoFX6nEHaHN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2R_WZqhidFaV8rS14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy52cI15FZ47jbqQNN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzyUFh8ooKQT3mrTi14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNAbd8K9PLBM9GKu14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy7tU1u8EOQ0ERt7iB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyFx6fMRiynIAwEXLF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMIqpve1Y6NBpVT_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
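
To turn this raw array back into the per-comment view above, it only needs to be parsed and indexed by ID. A short usage sketch, assuming the array is stored as a JSON file (the file name is hypothetical):

```python
import json
from collections import Counter

# Parse the raw batch response (the array above) and index it by comment ID.
with open("raw_llm_response.json") as f:  # hypothetical file name
    records = json.load(f)

by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_Ugy7tU1u8EOQ0ERt7iB4AaABAg"]  # an ID from the array above
print(rec["responsibility"], rec["policy"])     # -> government liability

# A quick distribution check across the batch:
print(Counter(r["emotion"] for r in records))
# For the array above: fear 5, mixed 2, outrage 1, resignation 1, approval 1
```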