Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting how the inevitability-of-doom scenario seems to be pushed to the margins while a highly improbable utopia is discussed disproportionately. Roman seems like a super smart guy, but I don't think you need to be one to not only see the danger but to conclude that the end of human civilization within the next 100 years (probably much sooner) is almost certain. It is much easier to list the optimistic scenarios, since there are so few (a mix of the following could also help): 1. cataclysmic regression of tech as a consequence of a war or natural disaster on a global scale; 2. demographics: not enough young people to keep innovation and/or the necessary market forces going to reach AGI; 3. Butlerian jihad: a global rise of social movements preventing further progress, perhaps in the style of the Spanish Inquisition; 4. autonomous sentience and free will being much harder than expected, or possibly not viable in machines. None of these is likely to happen quickly enough, imho. Soooo, enjoy life for now. There is probably nothing we can do.
youtube 2024-07-09T06:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgxNAuHqR4kFP3s8LLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzwyTPpQLsMN3hEfHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgxmxCieybmA-WGAg8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgwrEtG_DGRYRRDGxxt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_UgyXMbSGUNiwtr594jh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgzFrMAY_PmD5nzzbEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwW_dIyCXOQ6CVOXaZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_UgxdJI60BOZYn8TEn2h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugwa1hfCbh78PildQrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugy3k6nSnFnDY5zYURZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
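The coding-result table above can be recovered from the raw response by parsing the JSON array and matching on the comment id. A minimal sketch (the `raw` string below is truncated to two of the ten entries for brevity; the field names are those in the actual response):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (truncated to two entries here).
raw = """[
  {"id": "ytc_UgxNAuHqR4kFP3s8LLZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzwyTPpQLsMN3hEfHl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the codes by comment id for fast lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for the comment shown above.
coded = codes["ytc_UgzwyTPpQLsMN3hEfHl4AaABAg"]
print(coded["responsibility"])  # ai_itself
print(coded["emotion"])         # fear
```

The dictionary comprehension keyed on `id` means each coded comment can be joined back to its source comment in one lookup, which is how the Dimension/Value table for this comment was filled in.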