Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "In good scenario- people can just engaging in yoga, meditation, spiritual advanc…" (ytc_UgxTN5E8u…)
- "Your basic assumptions are incorrect, that’s not the way AI nor copyrights work.…" (ytc_UgzYQxfVY…)
- "Wow, I never considered this. Pretty Deep... although, I believe autonomous veh…" (ytc_Ugj6Ag92f…)
- "Not gonna lie, autopilot is not as good as I thought it was. There no way Tesla…" (ytc_UgwFdAFcL…)
- "I asked ChatGPT if, under Massachusetts law, there is an insufficient factual ba…" (ytc_UgyRoaCg7…)
- "As a software engineer with 10+ years of experience, finishing top of my class a…" (ytc_UgxuiwOP1…)
- "Him:"Global AI will be distributed across the planet" / Me: Oooo k, welp its been…" (ytc_UgzR2GkUc…)
- ""Senate Proposal Aims to Establish AI ‘Override’ Button in Healthcare" --------…" (ytc_Ugyt7r5Yf…)
Comment
Arguing for AI by saying humans won't build dangerous things because we have "agency" is without question one of the most ridiculous statement I've ever heard uttered. One of the first things humans did when raw ore was discovered was to hammer it into swords and spear tips. AI is the raw ore of the 21st century...the question is not whether or not it will be used to create a weapon (it will), the question is simply whether or not AI is the weapon that will end humanity and possibly all life on Earth. The odds are 99.99999% in favor that a truly sentient AI will see it's human creators as a threat.
youtube · AI Governance · 2024-01-18T13:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyreUEyy6bDsfa3NdN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaWRVCOllZYcVvmQN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgywanQgjQKLMLrS0H14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy7Sh_rbDPPSl0o6Md4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzLhHF6ZDYG1ICN7S54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwJZwpUcLpHC-C0x1l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwLdo5ox4NbEUb8qXV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyg85VDYc8ZQCvtqHt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx7-bNdT3R1vS6cZoF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGqMRtgXoqtGIqKUV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"}
]
```
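The "look up by comment ID" step can be sketched against a response shaped like the one above: parse the JSON array and index each record by its `id`. A minimal sketch, assuming the raw response is valid JSON; the function name `index_by_comment_id` is illustrative, not part of the actual tool.

```python
import json

# A raw batch-coding response in the format shown above (truncated to
# two records for brevity; the real response carries one per comment).
raw_response = """
[
  {"id": "ytc_UgyreUEyy6bDsfa3NdN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxaWRVCOllZYcVvmQN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
# Fetch the full coding for one comment by its ID.
print(codings["ytc_UgxaWRVCOllZYcVvmQN4AaABAg"]["policy"])  # regulate
```

In practice a model may wrap the array in markdown fences or commentary, so a production version would strip non-JSON framing before calling `json.loads`.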