Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "If AI is going to take most of the jobs then how will people survive, how the MN…" (ytc_UgyFqtMfq…)
- "I think calling a receptionist to offer AI receptionist service to replace them …" (ytc_Ugxdw8CoM…)
- "Do people think humans are the pinnacle of intelligence, the peak of what is pos…" (ytr_Ugx4Dco6u…)
- "I also want to point out that usually we see tech as being moores law, which is …" (ytr_UgwAxx4Mh…)
- "We must destroy all robots, all robots and AI are a threat to the human race, th…" (ytc_UgyPIYMj1…)
- "Why only look back at such moments as cell phones and social media. The introduc…" (ytc_Ugzyh_yzn…)
- "Generating AI art is more like commissioning art and trying to pass it as yours.…" (ytc_Ugw_-opgy…)
- "May I ask why you work overtime every single day without being compensated for i…" (rdc_hmvj60d)
Comment
Whoa, whoa, whoa. Did Mike argue that technological progress is inherently good? Whoa.
Technology is a steady 7 on the ethical pH scale. AI could be used by martial forces to secure unethical goals, yet the atomic bomb could blow up a jerk meteor heading for earth. Seemingly good can be used for bad, and vice versa. Hell, 3D printers can print guns.
Technological progress must accompany cultural progress; when we have the right goals, then, and only then, it would be unethical not to develop AI.
Platform: youtube
Posted: 2013-06-14T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz5at2qWXko-gJslq54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw54FdmdJoGjVY7w7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFPoz7QKeKlPiHMvx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyX6HSB_4np6z6G0pN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwXilX8rczL5TK0G0h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZI61W3114eCEJTr94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxYAQEqj25JsaHsBu14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBGaAKTgfbnk-e6Kt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwjkZ2eq_Aq0rd5g854AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw9iHHqXlorN8EPMVJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
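The raw response above is a plain JSON array of per-comment codes, so looking a record up by comment ID reduces to parsing the array and indexing it. A minimal sketch, assuming the response is well-formed JSON with the four dimensions shown; the two embedded records are copied from the response above, and `index_codes` is a hypothetical helper, not part of the tool:

```python
import json

# Raw LLM response: a JSON array of per-comment codes (excerpt from above).
raw = """[
  {"id":"ytc_UgzZI61W3114eCEJTr94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwjkZ2eq_Aq0rd5g854AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codes(raw_json: str) -> dict:
    """Parse a raw response and index records by comment ID,
    skipping any record that is missing one of the four dimensions."""
    records = json.loads(raw_json)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }


codes = index_codes(raw)
print(codes["ytc_UgwjkZ2eq_Aq0rd5g854AaABAg"]["policy"])  # regulate
```

The same indexing step would back both the ID lookup and the random-sample inspection: each sample ID in the list above keys directly into the parsed dictionary.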