Raw LLM Responses
Inspect the exact model output for any coded comment. On the interactive page you can look a comment up by its ID or click into one of the random samples; the example below shows one coded comment in full.
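If you are working from an exported copy of these responses rather than the page itself, the same lookup is straightforward over the batch JSON. A minimal sketch, assuming the raw batch responses are saved as a JSON array like the one at the bottom of this section (the file name `raw_llm_responses.json` is hypothetical):

```python
import json

# Hypothetical export path; point this at wherever the raw batch
# responses are stored as a JSON array of coded records.
RAW_RESPONSES_PATH = "raw_llm_responses.json"

def index_by_comment_id(path: str) -> dict[str, dict]:
    """Map comment ID -> coded record for constant-time lookup."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(RAW_RESPONSES_PATH)
print(codes["ytc_UgydX9KsXkvPOd_CNVt4AaABAg"]["emotion"])  # approval
```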
Comment
Love to see this conversation starting to play out on a larger scale. I think the important thing is to not trivialize facts or rush to conclusions. Personally, do I believe LLMs pose a mortal danger to humanity? No, I think they are too constrained by their training distribution. But it is important to distinguish that super-duper AI isn't necessarily an LLM. We are just scratching the surface with what these datacenters are capable of. And it is true that the companies rushing forward are barely paying lip service to the valid concern of harm. As Nate said, 5 years ago the machines weren't talking.
youtube · AI Moral Status · 2025-10-30T20:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
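Each record assigns one value per dimension. As a light sanity check, here is a sketch that flags records whose values fall outside the categories actually observed in the raw response below; the real codebook may define additional categories, so treat these sets as illustrative rather than exhaustive:

```python
# Category values observed in the raw response below; the full
# codebook may allow more, so these sets are illustrative only.
OBSERVED_VALUES = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def check_record(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed sets."""
    return [
        dim for dim, allowed in OBSERVED_VALUES.items()
        if record.get(dim) not in allowed
    ]
```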
Raw LLM Response
```json
[
{"id":"ytc_UgyxUlZ1U3WDzWd6NA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjC4ynLPj748PRgNJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgydX9KsXkvPOd_CNVt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyJyFcYmflsqfeWYNh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugz1e5I1tkoZ41iRjf14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxOyOUos_8xSp2pq8d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgywYasgpXCF0OXUODR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyP4FR3gM33-qNFYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwJgI4op1Lq_OxmJm14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzt7NEvame2ldE72X14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
```
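The model codes comments in batches of ten, and the Coding Result table above corresponds to the third record in this batch. A quick consistency check, reusing the hypothetical `raw_llm_responses.json` export from the earlier sketch:

```python
import json

# Load the same hypothetical export used in the lookup sketch above.
with open("raw_llm_responses.json", encoding="utf-8") as f:
    batch = json.load(f)

# Pull the record for the coded comment shown above and verify that it
# matches the values displayed in the Coding Result table.
record = next(r for r in batch if r["id"] == "ytc_UgydX9KsXkvPOd_CNVt4AaABAg")
assert record["responsibility"] == "distributed"
assert record["reasoning"] == "consequentialist"
assert record["policy"] == "regulate"
assert record["emotion"] == "approval"
```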