Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a former software engineer and trained hypnotist. I've been using ChatGPT to produce the hypnosis scripts I use to hypnotize people on my YouTube channel. Where it's at now is that most of the mechanical components of hypnosis (i.e., text structure, language choice, etc.) are there, but the subtle discretionary elements that depend on the hypnotist's personal (i.e., human) style just aren't - and I don't believe they ever will be. While the AI produces about 80% of a useful script, the remaining 20% still takes human effort to make it work.

As a software engineer, I don't believe AI will ever get to the point where it performs 100% of the work. Rather, AI will automate 99.999999% of it, and the remaining 0.000001% that only a human can do will be insanely important. So much so that when (or if) the dust settles on the AI singularity, the vast majority of tasks we perform now will be performed by AI, but the very small tasks that AI can't do will be as important as all the tasks we do now (that AI will do in the future) combined.

Here's an example. A typical software development team may have 3-4 developers, a business analyst, a project manager, a scrum manager, and a quality assurance analyst working 40 hours a week. In the future, thanks to AI, the QUANTITY of work that team performs may approximate the amount they do in 5 minutes today (with AI doing the rest) - but the QUALITY of the output will be the same. Such is the hope, if the technology doesn't become corrupt and destructive to the point it ends all life on earth.
youtube · AI Governance · 2023-04-18T05:2…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   developer
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxJWySX5OSIEzDFMX94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_Ugxih4D95TnwdVzzFaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugz_-NT-SlibacepDOh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_Ugw_FEghpm5CEMMPi7J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgxPBagMdqJq-_Wjvll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_Ugzq5g5q0lpAhQd1fix4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugzr8RJ8W6uoVXhIEQl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},{"id":"ytc_Ugwerwv3eV3JlwrvoSt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_UgzfxN2Im-BeHcpDY0h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgyIjsr2aXS2SoghchZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}]