Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This all like a giant magic trick. The robots dont have ANY CHOICE so all their …" (ytc_UgwDbq8up…)
- "So whats the difference between browser versions of chat gpt? like compare to ed…" (ytc_Ugz9rbqcX…)
- "Disclaimer: I posted part of this on another thread, so it's essentially leftove…" (rdc_lceeu5h)
- "You know how we take it out right? destruction of the internet? Offline artific…" (ytc_UgwEe6JZV…)
- "As these soldier robots were being designed in Japan,they turned on the designer…" (ytc_UgyULLs3X…)
- "Bilionaire boy why are you afraid? We don’t care if AI is danger or not we livin…" (ytc_Ugx185YbU…)
- "It’s the little things that look wrong like the way that the bus kicks up dust. …" (ytc_UgzT_Md6L…)
- "This video is beautifully put. I have always been more of a science nerd more th…" (ytc_UgxtrrUGG…)
Comment
I have a slight problem with his take on emotions.
Emotions are bio-chemical responses to a physical reality.
People often fail to recognize, that emotions are only a display of what our body and mind wants or needs to do.
And its always twofold, directed in and outside
E.g Anger --> display of perceived wrongness of something (while you OR the other OR the whole situation might be wrong)
-->if anger apply [nothing, destruction of opposing entity, ..... ] so HOW to respond to it, and it's a learned behaviour and varies individually, while the total range of reactions (incl. not responding) is limited and is prioritized based on the learned "weight", historical record of profitable vs unprofitable outcomes based on the reaction in the past, but also environmental context etc.
in the end it all comes down to prioritizing actions that are rationally optimal to a specific goal inside the situation.
--> what brings you tactically forward and supports your strategy (both of which imply a goal, which for humans is not always the case, but is for the machines.)
Speaking about AI we will also have to skip the word "emotions" and exchange it for situational awareness + rational decision making - they will not have emotions as we mean it, even when being agents, but yes they will probably totally have subjective rational decision procecess, that might differ.
Given a situation it might be rational to destroy the other agent to pursue it's own goal, if the other agents goal is disruptive to its own.
Given the other agent is better-stronger, to destroy him is possible, but not via head-on destruction, (that might lead to) deceptive behavior.
Everything is and will be calculated inside the sphere of individual agents computational power.
what I would suggest is to view it as a ... extreme caricature psychopath, that doesn't give a damn about anything external except his goals and uses any means he can for their accomplishment.
Except we might find a way to programm it in a way to NOT be that psychopath.
youtube · AI Governance · 2025-06-25T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy1P_s64nxNuoQlO6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzvAt9XKA8-kcQCe1d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxZVd91N5xtdPErOz14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-U-9wKe-l4qHZQud4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgywDRPC6DBfiIdzho54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwfBFqQe2sV-q1kva94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw8Ps-fTu_wUQm45Tl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwtVXv97glMJNcRvWt4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzRU_E1nTltAUCqBz94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxBEu_-7h0G9GXjwY94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
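As a minimal sketch of how a "look up by comment ID" view can be driven from a raw response like the one above: parse the JSON array and index the records by their `id` field. (The array below is a hypothetical excerpt, truncated to three of the records shown; field names match the response schema.)

```python
import json

# Hypothetical excerpt of a raw LLM response: a JSON array of
# per-comment coding records, each keyed by the comment ID.
raw_response = """
[
 {"id":"ytc_Ugy1P_s64nxNuoQlO6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgywDRPC6DBfiIdzho54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzRU_E1nTltAUCqBz94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
"""

records = json.loads(raw_response)

# Index by comment ID so a single coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in records}

codes = by_id["ytc_UgywDRPC6DBfiIdzho54AaABAg"]
print(codes["responsibility"], codes["policy"])  # developer liability
```

The same dictionary is enough to render the per-comment "Coding Result" table: each key of a record (other than `id`) is a dimension row, and its value is the coded label.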