Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwRAUFbz… — "Question, he asked what do we think would happen if a neuron in your brain was r…"
- ytc_UgwTPnka_… — "I was asking help about my relationship from Chatgpt.Thnak god i didn't listen i…"
- ytc_UgwKpuEkp… — "Guys WALL-E was fantasy. In reality if AI takes all the jobs there will be such …"
- ytc_UgwOPLX8F… — "there are millions of professionals got laid off due to AI as for now. After the…"
- ytc_UgxPinI3e… — "I’m an incoming 11th grader and I was set on majoring in computer science but no…"
- ytc_UgyeIhWjl… — "All jobs cannot be automated. The electrical infrastructure to charge all the r…"
- rdc_d2y12bq — "When the Homestead Act was passed most of the great Plains was known as "The Gre…"
- ytc_UgxSW7Ia8… — "You are so cool I love your art and your voice you are so cool I want to make ar…"
Comment
@greenbeans7573 It will be smarter than us but the reality is there's no reason for it to want to take over humanity. It doesn't have wants and desires, as its not sentient. And even if it accidently tried to destroy us because it misunderstood an objective we gave it, if it was truly super intelligent it would realise that we made a mistake, and propose a correction to our rubbish objective/programming. And even if AI wanted to destroy us, it couldn't because it can't infiltrate different networks and hardware, our systems simply aren't that interconnected. And lastly, can a evil human use AI to do bad, yes, but can they pose an existential risk with it, no!
youtube · AI Governance · 2023-07-11T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytr_UgzZBZa5vsqXpN2YZ2t4AaABAg.9rsOXTlFMvX9sHGTRNB36K", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsKHLjQ-rd", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsnnSxiPBn", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_Ugx9bhtLneJ2aN4J9xl4AaABAg.9rpjteLMIMZ9t1-RcsIlgQ", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx5LT0M-B6vvyirP9Z4AaABAg.9rohS4EIjnX9s0zPcrwSe2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzhgbL9ssnNPSoPVXN4AaABAg.9rebYMGdY7H9viUlqHIxPn", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s-mgyxbLXl", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s2cWYjwYHB", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxUDcaHQU7hVLpShSp4AaABAg.9rdSkgnSv1F9s3w0HmoPgU", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxGaW9p18AEp5IotE94AaABAg.9rcov6TyeMk9sFJ0z6J2yF", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
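The raw response above is a JSON array with one object per coded comment, using the four dimensions from the Coding Result table. A minimal sketch of how such a payload could be parsed and sanity-checked before ingestion (the allowed value sets below are assumptions inferred from the codes visible on this page, not an authoritative codebook):

```python
import json

# Allowed values per dimension — inferred from the codes shown on this page;
# a real pipeline would load these from the project codebook.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "indifference", "fear", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only rows whose
    values fall inside the allowed sets for every dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical example row (shortened id for illustration):
raw = ('[{"id":"ytr_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(len(parse_codes(raw)))  # 1 valid row
```

Dropping (rather than repairing) out-of-vocabulary rows keeps the check simple; a production coder would more likely log and re-prompt on invalid values.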