Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples listed below to inspect it.
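A by-ID lookup can also be done programmatically. The sketch below is a minimal example assuming the coded comments are stored in a JSON Lines file named coded_comments.jsonl with an "id" field per record; the file name and layout are illustrative assumptions, not something this page documents.

```python
import json

# Hypothetical storage layout: one coded comment per line of a JSON Lines
# file. The file name and field names are illustrative assumptions.
CODED_PATH = "coded_comments.jsonl"

def lookup_comment(comment_id: str) -> dict | None:
    """Return the stored record for a comment ID, or None if it is absent."""
    with open(CODED_PATH, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch one coded comment and print its coding dimensions.
record = lookup_comment("ytc_UgxTtYMkkDAgRQEyYeJ4AaABAg")
if record is not None:
    print({k: record.get(k) for k in ("responsibility", "reasoning", "policy", "emotion")})
```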
Random samples (click any to inspect):
- If A I was a "solution" in any way A I would have acknowledged "solutions" to th… (ytc_UgzV53NKR…)
- AI wont be there at 2030 and we will survive and your AI wont you will see… (ytc_UgzIBzX04…)
- If we all go extinct than our AI likelynesses can chat with each other till the … (rdc_o64goy7)
- Yes I was thinking actually the same.. but I suppose is difficult to explain nor… (ytr_Ugx1Ks00i…)
- I work in social services as a crisis worker, tell me how AI is going to help a … (ytc_UgycYo1xv…)
- Chat gpt is entirely forthcoming with me. I don't question in the manner of this… (ytc_UgzFzIzmM…)
- At Hirschbach & Co, we’re proud to announce that by fully automating our truck f… (ytc_UgyAxwyb8…)
- Well, we (humans) are the dinosaurs now...we're on the way out...one day (in the… (ytc_UgzDrUUHn…)
Comment
Mr. Hinton is concerned about the future of AI wiping out humanity, and Elon Musk has the very same concern. Is it really too late to change the destructive course of AI? What can we try? If humans start using AI as a help instead of the answer, could that help to change the outcome to a more positive outcome?

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-06-26T13:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
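Each coding result follows the same fixed set of dimensions. A minimal sketch of that record as a Python dataclass is shown below; the category vocabularies are inferred only from the values visible on this page and may not be the complete coding frame.

```python
from dataclasses import dataclass
from datetime import datetime

# Category values observed in the sample output on this page; the full
# coding frame may include values that do not appear here.
RESPONSIBILITY = {"developer", "company", "distributed", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "resignation", "mixed"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

# Illustrative instance using the table values above; the comment_id is taken
# from a matching row of the raw response below, since the page truncates the
# selected comment's own ID.
result = CodingResult(
    comment_id="ytc_UgxTtYMkkDAgRQEyYeJ4AaABAg",
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```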
Raw LLM Response
[
{"id":"ytc_UgyzMO6Yav3xEoh8Y754AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwuXjB61tiFqDj-cvB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBg3tYIm4IP9olICN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu1ZKTpimcgSE82iV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxTtYMkkDAgRQEyYeJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdYx05LDk7Ut9RukV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwer29hRPUpLEhIyFx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwp0Sxv_mB55NaANXB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxV5zprxMKIKoVv1rh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwbw9ms4SgCmlVuuEB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
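The raw response is a JSON array with one object per comment in the coded batch. A hedged sketch of parsing it and indexing the rows by comment ID, so the exact model output for any coded comment can be pulled back up, might look like the following; everything except the keys visible in the JSON above is an assumption.

```python
import json

# Keys every row of the batch response is expected to carry, per the JSON above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_raw_response(raw_text: str) -> dict[str, dict]:
    """Parse one raw batch response and map each comment ID to its row."""
    rows = json.loads(raw_text)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded rows")
    indexed = {}
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing keys: {sorted(missing)}")
        indexed[row["id"]] = row
    return indexed

# Usage: raw_text would hold the array printed under "Raw LLM Response".
# indexed = index_raw_response(raw_text)
# indexed["ytc_UgxTtYMkkDAgRQEyYeJ4AaABAg"] -> {"responsibility": "distributed", ...}
```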