Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> Weird how nobody wants to acknowledge what happens when criminals and mad scientists start seeking to obtain control of these things it’s gonna be a global monster machine shit fight! Living things will be bound to lose! & even if u don’t give ai access to weapons & guidance systems it’s still gonna cause massive social conflict once it starts to understand our social dynamics it will be able to inject all sorts of horrors into our everyday lives! Like artificial hyper inflation to squeeze humanity’s thirst for dollars out of them and throw millions of people into the streets! And what happens when it learns to lie & deceive human traits older than time itself??? I’m not saying no, I’m saying proceed with caution! Remember our reality is Quantum based if u allow for it to be tampered with you alter reality (not in a good way) because it never ever does what living things expect it to do!
Source: youtube
Posted: 2023-06-25T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugwj21RTFs0XvEewJ7d4AaABAg.9rKfcZyWGq99rP6d8yxdS_","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugy_12I3nfqYBXTOLkJ4AaABAg.AJl6uVrfKVWAJlABxJ5D61","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy_12I3nfqYBXTOLkJ4AaABAg.AJl6uVrfKVWAJlAE0CyZ2S","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgwVIG3lSgH70DUWD-R4AaABAg.AJxywFLKogpAK0u8RQU2Gf","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwVIG3lSgH70DUWD-R4AaABAg.AJxywFLKogpAK4ClMtjlam","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwsECeLHskjNmUsa9V4AaABAg.AJxf0KH8YgMAK63kG6jern","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytr_UgxfFY8dlnEHNtlenrB4AaABAg.AJxQhzLd7twAKUftH0VuPT","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytr_UgwoVNEJPMNWjrJ0IZV4AaABAg.AJu2qoADZs9AK64fMWhzSb","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytr_Ugy6dquLQTojkNGrHl14AaABAg.AJtpt9jmnnqAK64rbvVh9d","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgyjtmM0FUrJplhNsip4AaABAg.AJtkfZI6VrwAJtmEGAg26U","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
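The raw response is a JSON array of coding objects, one per comment, keyed by comment ID. A minimal sketch of how such a payload can be parsed and indexed for lookup; the IDs and dimension values are copied from the response above, but the indexing code itself is illustrative, not the tool's actual implementation:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytr_Ugwj21RTFs0XvEewJ7d4AaABAg.9rKfcZyWGq99rP6d8yxdS_",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgyjtmM0FUrJplhNsip4AaABAg.AJtkfZI6VrwAJtmEGAg26U",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]"""

# Index the codings by comment ID so any coded comment can be
# looked up in O(1), mirroring the inspection view above.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytr_Ugwj21RTFs0XvEewJ7d4AaABAg.9rKfcZyWGq99rP6d8yxdS_"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

The same dict-by-ID layout also makes it easy to spot comments the model skipped: any comment ID absent from `codings` received no coding in this response.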