Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am just 17 years old I have a very deep Intrest in AI and the upcoming future of it here is what i think about AGI (next level of AI )

CASE 1 - AGI as a SAVIOUR suppose we got AGI in 2030 and we regulated it well and restricted to make its own goals we managed to make AGI follow human goals then we will do 100-200 years of progress in just 10-15 years we will find the secrets of universe , cure to cancer , how to end poverty , climate changes and the rise of earth's Temperatre is not a threat anymore we speed up the research on planning to civilize on mars planet.

CASE 2 - AGI as a threat to Humanity if we even build AGI for once and give it the data of internet it becomes beast it knows abotu what we think of AGI and what we will do to stop him if it mis leades so it will aslo prepare itself for different outcomes it can manupulate and advance quantum computing and connet to the super computers without telling huamans like its crazy and scary as well it took a lot of time and reserch money and resources if any of them developes AGI first and lets suppose AGI gets caought by not following the guidelines and limitations the companies will still be in the favour of AGI bcz all the time , resource , money and reserch that took them to at this point and govt tries to shut it down ? it will not happen the collapse of hummanity in 2035-40
youtube 2025-07-31T07:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxf9hNg0IgPdxS1VPd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugy8RLiy_eYm8S-DquF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzDdU6xg2-iz2j6FKd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzcxaIxj8gNE45AlYt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgysL3HxuxQVVg314ZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzu25x7OHBCNG540Vd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugywc1NTlw2D6fclam14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzMT49S0XxUDtXV4rp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy18Z66Kw1REv01F1F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxBC2BL2T7VIdilapl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
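The coding table above is a single record drawn from this raw array. A minimal Python sketch of how such a response can be parsed and matched back to a comment id (field names and ids are copied from the records shown; the `by_id` lookup is illustrative, not part of the tool itself):

```python
import json

# A raw LLM response is a JSON array of coding records, one per comment.
# Two records from the array above, abbreviated for the example:
raw_response = '''[
 {"id": "ytc_UgzDdU6xg2-iz2j6FKd4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
 {"id": "ytc_UgzcxaIxj8gNE45AlYt4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

records = json.loads(raw_response)

# Index the records by comment id so one comment's coding can be looked
# up directly, which is how a Dimension/Value panel like the one above
# could be populated for the selected comment.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgzDdU6xg2-iz2j6FKd4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # approval
```

In practice a validation pass would also check that every id in the response exists in the comment batch and that each dimension's value is from the allowed code set, but that is beyond this sketch.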