Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Broooo, "kill 5 people or kill 1". There's no ethical dilemma here. You pull the…
ytc_UgxN0Sxn-…
The next Terminator movie will have the T-GPT model with polemic, diegetic and a…
ytc_UgwjGksqv…
The problem is not the machines or AI, the problem is that they are owned by ind…
ytc_Ugz8oFctO…
Exploiting Generative Ai for personal gain is a very dangerous game to play even…
ytc_UgwLxLI2T…
AI has to be nuetered as to not be too smart. Remember Googles Hiring AI? Shutdo…
ytc_UgyYxOMiR…
The real difference to me is not that AI is not a tool, but that it's a tool bas…
ytc_UgzV-MGKn…
That's because there's no line of ChatGPT code that's about not telling yo mama …
rdc_jg9hnjt
Creating AI robots and making them as smart or smarter than humans is pure madne…
ytc_UgwGEMVVm…
Comment
The parts about artificial general AI were pretty superficial. The point about the code of the AI modifying itself: it could, but if the code generated by AI is anything to go by, it would break itself. Having done so, it would be unable to debug itself. AI will rise as high as humans choose to elevate it.
As for UBI, its nature was completely missed by the interviewer and the interviewee. UBI is the new slavery. Worse, as the slave owners no longer need most of the labour. The most efficient solution is to eliminate the surplus labour. In the words of Charlie Munger: "Give me the incentive and I will give you the outcome."
youtube
Cross-Cultural
2025-12-22T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
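A coding like the one above can be modeled as a simple record with one field per dimension. A minimal sketch in Python, assuming the value vocabularies are exactly those observed in the codings on this page (the real codebook may define more values; the `Coding` class name is illustrative):

```python
from dataclasses import dataclass

# Dimension vocabularies as observed on this page (assumption:
# the full codebook may allow additional values).
RESPONSIBILITY = {"ai_itself", "government", "company", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed"}


@dataclass
class Coding:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check every dimension against its observed vocabulary."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)


# The row from the table above: ai_itself / mixed / unclear / mixed.
coding = Coding("ai_itself", "mixed", "unclear", "mixed")
print(coding.validate())  # True
```

Validating against a closed vocabulary like this is a cheap way to catch malformed LLM output before it reaches the results table.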
Raw LLM Response
[
{"id":"ytc_UgygVmyebOyvUvW74yZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxjh4KVEwk47k_RVs14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZyPvJQ86CsNxgDUF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwg3OxjiOmZtvwRIB54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxsWLBREBy5m_7oeLx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzUf42bzpdsdZk9-CV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyHwoDxEWN4J49ZulN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwInWsTkNUGFl-oVe54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwCUMQcvIWsQY2iIrR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRjoMUafkBmaijEg54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
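The raw response above is a JSON array with one object per comment. A minimal sketch of parsing such a response and indexing it by comment ID, which is what a look-up-by-ID view needs (variable names are illustrative; the two records are copied from the response above):

```python
import json

# Two records from the raw LLM response shown above.
raw_response = '''[
  {"id":"ytc_UgwCUMQcvIWsQY2iIrR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRjoMUafkBmaijEg54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# Index codings by comment id so a single comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_UgwCUMQcvIWsQY2iIrR4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself mixed
```

In practice the model's output may not be valid JSON, so production code would wrap `json.loads` in error handling and fall back to re-prompting or manual review.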