Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
10:05 [ EN ] This was not human error; the operator was given clear, confirmed instructions from his superior and did exactly what he was trained to do. Here is what the situation actually looked like: because of the tense situation, the early warning system was switched to a higher sensitivity (it checked the data less carefully) and recognized several radar reflections as a single fast missile. Due to this over-sensitivity, it ignored the fact that it had never detected the rocket's launch and takeoff. The authorities did not admit the error, because after the incident the system was shut down for repairs and the US had no automatic warning system for some time. A "manual" Cold War-era system was used instead, which, however, was not effective against the threat from North Korea.
youtube AI Governance 2024-11-05T18:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzL95_sQPq3_MsBWBx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzioRFd-xgmDmvIlOF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzG55gDZlQ9DRSyW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgziwFosUCF0bAs0QZ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxWSCEFIAKYJbUxxuJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzOFhRhFX8VCrvt0y54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzfqcyscWTmKRQmwG14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWjVijDU-pIJ8BFpx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzA8fMp6BQd-QeGY1t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRYHtQRXaBSl5xXqd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
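A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not part of the original tool: the dimension names come from the response itself, while the allowed-value sets are assembled only from values observed here and may be a subset of the real codebook.

```python
import json

# Two entries trimmed from the raw LLM response above, used as sample input.
raw = '''
[
  {"id": "ytc_UgzL95_sQPq3_MsBWBx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzA8fMp6BQd-QeGY1t4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
'''

# Allowed values per dimension -- reconstructed from this one response only;
# the full codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "approval", "resignation", "fear", "outrage"},
}

def validate(codes):
    """Return (comment id, dimension, bad value) triples for out-of-schema codes."""
    errors = []
    for row in codes:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append((row.get("id"), dim, row.get(dim)))
    return errors

codes = json.loads(raw)
print(validate(codes))  # [] -- both sample entries conform to the observed schema
```

Running the check before ingestion catches malformed or hallucinated labels, which LLM coders occasionally emit even when prompted with a fixed schema.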