Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
A zero-sum mentality is the belief that in any given situation, one person's (AI-assisted) gain must come at someone else's loss. It's like imagining life as a pie: if someone takes a bigger slice, there's automatically less for everyone else. This mindset assumes resources, whether money, success, love, or opportunity, are fixed and finite. But here's the twist: most of life isn't zero-sum. Many situations are non-zero-sum, where collaboration creates more value for everyone; even those who own the AI and robots depend on the rest of us, at the very least needing someone to buy their products and services.
| Source | Topic | Posted |
|---|---|---|
| youtube | AI Jobs | 2025-07-01T20:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw_7aD-HYC4dy-kDZp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyrkg3o-EdGo4_hIkZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8p20Nf26RF99rTPd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwN0EZuT45Gk2mSs054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOD_mQsKN__kvHZpV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz6iRgQzPTD6656QBV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxpfCARrKkQmqDEudV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFT7rtvnhO2IQofN14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxk8-Xw0vSORk4vLR14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwaZ99WgnlM1KgzZIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```