Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
5:48 The point about attacking basic research is that you really won't feel it for 10, 15, or 20 years. AI at its current stage can't take over — take over what, when it sits in servers playing with data? Taking over would mean accessing the electric grid and similar hands-on devices and going rogue. Remember, when the very same tool accesses those hands-on devices only to make them more efficient, that isn't considered taking over. It can take in vast amounts of data only to synchronize that data and run those devices efficiently. Who wants to control a light bulb? When you are used to switching a bulb manually, you can say you are in control, because you can turn it on and off. Now put a chip on the bulb's switch and you no longer have to flip it by hand — with a command, though you still can. With AI, the system will know when you want the bulb on or off and will do it on your behalf. Now extend that to every device on its grid. That is immense once we factor in all the devices: imagine every bulb on the planet, and that's only the bulbs. For AI, imagine the whole Internet of Things — power plants, factories, farms, teaching, healthcare, engineering the planet, and so on. All of that, and someone wants to control what, exactly? Even from a data perspective alone, we won't come close to grasping what we would want to control or how. When most people no longer have to work because we have automated, the claim of wanting control becomes hollow — control what? It's like driving: we say we are in control, right? I don't think so. While driving you are following the tarmac lines, not in control, simply following a sequence predetermined already as you move from point A to point B. In the same way, machines will be running our systems to optimize them, doing it at the speed of light. Who would want the burden of running those sequences of events in their own brain, only to come up short against their AI counterpart? If I can't make you understand that, I'm sure even the AI will wonder: control what?
Remember, some Internet-of-Things systems are optimized globally while others are customized to each individual's interests. Unless you are saying you want a back door through which you can instruct the system to be biased toward particular interests — and that isn't control, that is manipulation. Control implies governance and an understanding of how the system functions. What we have now is data access, not grid access, used to optimize things in relation to the economy. Let's not forget that we can interrogate these algorithms to explain their actions, since we understand their functionality to the core. On that basis we can reason with them to adjust particular parameters as we sequence the outcomes. How many people will have the capacity and the time to do that? Very few, especially when their time isn't paid. And even that isn't the real issue; the issue is that you can run all of that and still end up with a negative outcome. In what sense were you in control then? Only if your intervention improved the entire ecosystem can we say you were in control, and we would need more of that to make the system optimal. Say I have electricity from solar, hydropower, geothermal, coal, biogas, and so on, and the combined output is 180% of demand. An optimal plan at 100% will use the cheaper sources, like solar during the day, and turn on the coal power plant only as a last resort. If we can't reason with AI in those terms, then we will have lost control, and it must justify its actions against that standard. As of now, this is how you control AI: you reason with it to optimize certain parameters.
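The commenter's electricity example is a merit-order dispatch in miniature: fill demand from the cheapest sources first and bring coal online only as a last resort. A minimal sketch of that idea — all source names, capacities, and costs below are illustrative numbers chosen to match the comment's 180%-capacity / 100%-demand framing, not data from any real grid:

```python
# Hypothetical merit-order dispatch sketch: total capacity is 180 units
# against 100 units of demand, so the cheapest sources cover demand and
# the most expensive (coal) stays off.

def dispatch(demand, sources):
    """Fill `demand` from (name, capacity, cost) tuples, cheapest first."""
    plan = {}
    remaining = demand
    for name, capacity, cost in sorted(sources, key=lambda s: s[2]):
        used = min(capacity, remaining)
        if used > 0:
            plan[name] = used
        remaining -= used
        if remaining <= 0:
            break
    return plan

# Illustrative numbers: capacities sum to 180 against 100 of demand.
sources = [
    ("solar",      50, 1.0),  # cheapest during the day
    ("hydropower", 40, 1.5),
    ("geothermal", 30, 2.0),
    ("biogas",     20, 3.0),
    ("coal",       40, 5.0),  # most expensive: last resort
]
plan = dispatch(100, sources)
# Solar and hydro run at full capacity, geothermal covers the remainder,
# and coal is never switched on.
```

Here "reasoning with the AI" would amount to inspecting and adjusting the cost parameters that drive the sort order, which is the kind of parameter adjustment the comment describes.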
youtube AI Jobs 2025-11-03T22:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzL8BnqWPMy4F_COuJ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwcYM_hdf14aWdjq4l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzfKdBt03bfY3Bc7SN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx_taxPwQxzT8euY8d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy5aL5BHy9wr_2WS_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxn5XHP-X_e-N-CqLR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsCO1J5R2u-iTfQIR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzucP9KawePnGzxl6J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfPpA6MIG0b9K7Dut4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5H1S4cK1bApLV9vB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
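A raw response in this shape can be validated and tallied with a few lines of standard-library Python. This is a sketch only — the five field names match the JSON shown above, but the parsing and summary code is illustrative and not the pipeline's actual implementation; the two records inlined here are a shortened stand-in for the full batch:

```python
import json
from collections import Counter

# Two records copied from the raw response above, as a small stand-in batch.
raw = '''[
 {"id":"ytc_Ugy5aL5BHy9wr_2WS_d4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugx_taxPwQxzT8euY8d4AaABAg","responsibility":"government",
  "reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

records = json.loads(raw)

# Every record should carry an id plus the four coding dimensions.
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(r) == expected_keys for r in records)

# Tally one dimension across the batch, e.g. who is held responsible.
by_responsibility = Counter(r["responsibility"] for r in records)
```

In practice the whole raw string would be fed to `json.loads` directly; a malformed model response raises `json.JSONDecodeError`, which is a cheap way to catch outputs that drifted from the required format.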