Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Thank you for your enthusiastic response! If you're intrigued by AI interactions…" (ytr_UgwHllwi3…)
- "AI must think we are so stupid, which we are. When someone apologizes to me I a…" (ytc_UgzU1Xkf4…)
- "This dude just said ai are going to be able to sleep? Brother they don't get tir…" (ytc_UgxZGUFNs…)
- "I've listened. I hear a few things. You haven't encouraged the public to learn m…" (ytc_Ugx9fTtjo…)
- "The thing that most people seem to miss (though Geoffrey Hinton alludes to it) i…" (ytc_Ugwjgv8G0…)
- "I asked it to list 5 connections that humans may have missed. Here are five pot…" (rdc_m2dy5dv)
- "I use AI more as an editor/beta reader for my creative writing. I know a lot of …" (ytc_UgyGBldkx…)
- "I am not sure if Neil thinks about the social consequences of such advancement w…" (ytc_Ugy0qyBkg…)
Comment
Can't regulate it since we don't have authority over other countries. As with any technology, we cannot allow enemies to get ahead of us on it. If AI becomes a threat to mankind, it doesn't really matter whether it starts here or in China. What does matter is allowing a country like China to get high-level AI first. I think the real danger in AI is how quickly we will become dependent on it and trust it more than anything else. If someone actively inserts political ideology into it, for example, it would always give politically spun answers in psychologically subtle ways, and younger generations will take them as fact without questioning them.
AI will also give governments and corporations unprecedented control over the masses through monitoring and prediction algorithms.
Edit: I see Elon touched on one of my points at the end.
youtube · AI Governance · 2023-04-18T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxj1DwBjj0x-fR194Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwaFsztO9Ys4JNIo0p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3CrdK78igcT8bjQ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2PLMZw-EdhZrl6Q94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyfe2xLjzyWyzh8YFJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxaP3i0YZChR4NuWHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6tR2U6pOXPayGlnB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzUx893kaex_2F21Nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy5jefjxtuEuXhkcVh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyBCLPQlNj_e_5ovLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
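A response like the one above can be turned into the per-comment coding records shown in the table. The sketch below is a minimal, hypothetical example of that step: it parses the JSON array, checks each dimension against the set of values observed on this page (the real coding scheme may allow other values), and indexes the records by comment ID for lookup. The function name and allowed-value sets are assumptions, not part of the original tool.

```python
import json

# Allowed values inferred from the examples on this page; the actual
# coding scheme may differ (assumption, for illustration only).
ALLOWED = {
    "responsibility": {"government", "developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) and
    return a dict keyed by comment ID, rejecting off-scheme values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        # Keep only the coding dimensions, indexed by the comment ID.
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID makes the "look up by comment ID" view a single dictionary access, and validating against the scheme catches the occasional malformed or hallucinated value before it reaches the results table.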