Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "there's a scifi short novel from 40ish years ago about autonomous killer drones …" (ytc_Ugx_fUG2G…)
- "One thing AI developers and drivers are NOT thinking of when wanna replace peopl…" (ytc_Ugw-A_Nqc…)
- "@jeremymcmahon9546 \"... if you think AI isn't going to have profound effects...\"…" (ytr_UgzzcioOv…)
- "@LizzyGromova I use my brain to orchestrate several of the world's most intelli…" (ytr_Ugwmsb6Cc…)
- "Good thing self driving cars are being developed by engineers and not philosophe…" (rdc_cymq3pm)
- "Calling it copilot and being more open and realistic about it's capabilities wou…" (ytc_UgwzXiNa1…)
- "I enjoy the optimism, but as someone who has been quite successful in this indus…" (ytc_Ugx6qF--Y…)
- "Allways told chatgpt to pretend it was the world leading expert in whatever topi…" (ytc_Ugx5p4Y0l…)
Comment
The problem with stopping or restricting AI; is that this will hinder said "side's" progress. Which will in turn give the opposition a lead towards their AI model reaching the "point of no return". Without solving the seemingly impossible issue of worldwide governance (cease ai progress, monitor and police every single possible covert advancement operation & related BEFORE someone somewhere gets that breakthrough) the best option is to go FULL STEAM AHEAD and let our AI models learn as fast as possible in the hopes of hitting that breakthrough first.... and then HOPING they/we can control it even enough to not have it become an ALL AROUND COMMON ENEMY.
youtube
AI Governance
2025-06-16T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyAiTOedrBS8WNTDGd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqLOJHMpGxwaQbFtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyY04CCzB8EuCV5_bF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTvpJrg-VRAsZ6zpJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyf_7ygdN7dVADAw6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKQ8402Egi5bDRRfF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz8_ThM8byOBjplkQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwfkGfYHhmjfE6sTPd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzB8GwtjR1rjEJbOhR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnAFwiAX2Nn3_VMhV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
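A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below assumes the dimension vocabularies are exactly the values visible in these samples; the actual codebook may define additional categories, and `validate_codes` is a hypothetical helper, not part of any pipeline shown here.

```python
import json

# Allowed values per dimension, inferred only from the codes visible in
# this sample (an assumption; the real codebook may include more values).
SCHEMA = {
    "responsibility": {"distributed", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "resignation"},
}

def validate_codes(raw_response: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded rows) and keep
    only rows that carry an id and a valid value for every dimension."""
    rows = json.loads(raw_response)
    valid = []
    for row in rows:
        if "id" in row and all(row.get(dim) in ok for dim, ok in SCHEMA.items()):
            valid.append(row)
    return valid

# One well-formed row passes; a row with an out-of-vocabulary value is dropped.
raw = '[{"id":"ytc_x","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
print(len(validate_codes(raw)))  # 1
```

This keeps a malformed or hallucinated code from silently entering the coded dataset: any row the model emits with a value outside the schema is filtered rather than stored.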