OpenAI Codex AI: the future of coding?
I came across this today, and I was amazed at how far AI for coding has progressed.
AI system that translates natural language to code.
https://openai.com/
https://youtu.be/SGUCcjHTmGY
Watch the video; it's pretty amazing. It's not hard to imagine that this type of technology will become how most code is constructed in the not-too-distant future.
Andy Piddock
https://livecode1001.blogspot.com Built with LiveCode
https://github.com/AndyPiddock/TinyIDE Mini IDE alternative
https://github.com/AndyPiddock/Seth Editor color theming
http://livecodeshare.runrev.com/stack/897/ LiveCode-Multi-Search
Re: OpenAI Codex AI: the future of coding?
The rebellion of the machines is closer than we think.
GitHub joined forces with OpenAI to create Copilot. Brilliant.
https://copilot.github.com/
Re: OpenAI Codex AI: the future of coding?
Dissenting views, on efficacy:
https://hackaday.com/2021/08/02/github- ... ce-future/
...business ethics:
https://m.slashdot.org/story/376272
...and licensing legality:
https://www.infoworld.com/article/36273 ... ation.html
Richard Gaskin
LiveCode development, training, and consulting services: Fourth World Systems
LiveCode Group on Facebook
LiveCode Group on LinkedIn
Re: OpenAI Codex AI: the future of coding?
Richard, thank you for these links, very thought-provoking. I found the comments on licensing legality in the InfoWorld article particularly engaging. So, as I understand it, Copilot is in effect a Microsoft-controlled product that scans GitHub repositories for code snippets and then feeds the found snippets to OpenAI's GPT-3 natural-language model to 'learn'. A developer can then use it for code completion, as long as they are using a Microsoft code editor and have been given access to Microsoft's OpenAI API via that editor.
So ... this is a Microsoft SaaS product, because it must be used in a Microsoft code editor; it uses other people's code (which, as I understand it, cannot be opted out of) that in most cases has not been unit tested or validated; and it has no respect for the licensing attributed to the code it scans.
This is a closed proprietary system. Calling it OpenAI is disingenuous and misleading at best.
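For a sense of the mechanics, here is a rough sketch of what a Codex-style completion request looks like. The endpoint URL, model name, and parameters below are my assumptions for illustration only; Copilot's actual interface is private to the editor integration.

```python
# Minimal sketch of a Codex-style completion request. The endpoint, model
# name, and parameters are assumptions, not the real Copilot interface.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"  # assumed endpoint

def build_request(prompt, api_key, model="code-davinci-002", max_tokens=64):
    """Package a natural-language prompt as a JSON completion request."""
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )

# An editor plugin would send the comment the user just typed, e.g.:
req = build_request("# sort the lines of a text file in descending order",
                    "sk-hypothetical-key")
print(req.get_full_url())
# Actually sending it would need urllib.request.urlopen(req) and a real key.
```

The point of the sketch is that the 'intelligence' lives entirely behind the API: the editor just ships your text off to a remote service, which is exactly why access can be gated to Microsoft's own tooling.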
Andy Piddock
https://livecode1001.blogspot.com Built with LiveCode
https://github.com/AndyPiddock/TinyIDE Mini IDE alternative
https://github.com/AndyPiddock/Seth Editor color theming
http://livecodeshare.runrev.com/stack/897/ LiveCode-Multi-Search
Re: OpenAI Codex AI: the future of coding?
Yeah, just when I was beginning to really like Microsoft. I still think Nadella is a great CEO, but many aspects of this project seem almost like something Ballmer would do.
Richard Gaskin
LiveCode development, training, and consulting services: Fourth World Systems
LiveCode Group on Facebook
LiveCode Group on LinkedIn
Re: OpenAI Codex AI: the future of coding?
On a related note, having worked with medical AI projects, the sentiment is universally the same: they rely on gaining access to medical data, often through dodgy deals behind closed doors with no informed consent from patients or treating physicians, and they seem to be driven by Big Money.
It is often associated with an air of not-quite-illegal-but-of-dubious-morality.
And the projects I've seen aren't even fit for purpose.
As an example, an AI-driven service was tested last year for the interpretation of CT scans of brains, the purpose being automated detection of bleeding in the brain. Now, CT scans are very good inasmuch as they have a very high signal-to-noise ratio, and bleeding is an easy diagnosis to make visually (the premise here is that it would provide automated diagnosis out of hours). Only it turned out that the service rejected 34% of scans as unusable (when they weren't, on review by a human), and of the remaining scans it got the diagnosis right in only 71% of cases - completely unacceptable, of course.
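Putting those two figures together makes the problem even starker; a back-of-envelope check using only the numbers quoted above:

```python
# Combine the two quoted figures: 34% of scans rejected as unusable,
# and 71% diagnostic accuracy on the scans that were accepted.
rejected = 0.34
accuracy_on_accepted = 0.71

accepted = 1 - rejected                    # 0.66 of scans actually read
overall = accepted * accuracy_on_accepted  # correct diagnoses over ALL scans
print(f"{overall:.0%} of all submitted scans correctly diagnosed")
```

So under half of all submitted scans come back with a correct diagnosis, which underlines why the service was judged completely unacceptable.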
I'm currently dealing with a third-party company that has managed to get a nationwide grant to 'onboard' hospitals with expertise in advanced cardiac quantification. The premise here is that heart scans are sent to them and returned within a few minutes with a fairly advanced quantification, and their selling point is that the analyses are very reproducible (i.e. no significant variability on re-analysing the same scan, which is a key metric; best-in-class software has a variability of 2-4%, but requires significant expertise and user interaction).
I've yet to see this in practice, but the figure the company quotes is literally 0% variability between interpretation and re-interpretation of the same scans, which I'm extremely sceptical about: these scans are ultrasound, with a much lower signal-to-noise ratio, and they contain a huge amount of data (effectively 80-100 cardiac images per second, for at least 2-3 seconds), and as these scans are real-time there is significant beat-to-beat variability with breathing, movement etc.
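The reproducibility metric itself is simple; a minimal sketch of how test-retest variability is typically computed (the ejection-fraction readings below are made-up numbers for illustration):

```python
# Test-retest variability: percent difference between two analyses of the
# same scan, relative to their mean. Best-in-class tools quote 2-4%.
def retest_variability(first, second):
    """Absolute percent difference between two repeated readings."""
    mean = (first + second) / 2
    return abs(first - second) / mean * 100

# Hypothetical ejection-fraction readings from re-analysing one scan:
print(round(retest_variability(57.0, 55.0), 1))  # prints 3.6 - within the quoted 2-4%
print(retest_variability(62.0, 62.0))            # prints 0.0 - identical outputs only
```

A literal 0% only falls out when a fully deterministic pipeline is fed the identical input twice; it says nothing about variability across re-acquired scans, which is where the real-world uncertainty lives.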
I can't help but feel it's a bit of a scam, but reserving judgement for now.
Universally, my experience is that this type of platform is championed by those who a) are in higher levels of management (no need for a pesky and expensive workforce now, is there!) and b) have no detailed knowledge... but that's just my mileage so far.
Re: OpenAI Codex AI: the future of coding?
Stam, very interesting insight into real-life uses of, and problems with, AI.
A couple of questions:
1. If a patient has been given a negative result by an AI system and then unfortunately dies from lack of treatment, who becomes ultimately responsible for the decision error?
2. Are all AI tests backed up with a human check, or just cases where the results are inconclusive?
Andy Piddock
https://livecode1001.blogspot.com Built with LiveCode
https://github.com/AndyPiddock/TinyIDE Mini IDE alternative
https://github.com/AndyPiddock/Seth Editor color theming
http://livecodeshare.runrev.com/stack/897/ LiveCode-Multi-Search
Re: OpenAI Codex AI: the future of coding?
Hi Andy
Needless to say, no one trusts automated systems; where they are used or tested, they do not replace the validated workflows but are run alongside them (hence 'tested'), and clinical decision-making is always based on well-tested and validated methods. No test is 100% accurate (that's physiologically impossible), but at least with the validated methods we are aware of the pitfalls.
The responsibility will always lie where the clinical responsibility lies: with the treating physician. Hence no one actually uses these systems alone - there is a very long way to go before clinical trust can be built up to the extent that such trust would be given automatically.
But there is increasing pressure to consider AI-based solutions. This has involved rather underhanded tactics from companies with deep pockets (quite big names), who in no subtle terms have given financial recompense to people in positions of power to get access to data to feed the machine-learning algorithms, and who are of course keen to capitalise on the market by bringing this to clinical practice.
This type of concern is always a major issue where projects are promoted and funded by big financial interests instead of being driven completely by the scientific community - there is a constant clash there.
Of the various solutions I've tried, the most accurate systems are those aided by machine learning but where a skilled/experienced reader guides the analyses, rather than fully 'AI-driven' solutions. A big caveat here is that some cheeky companies actually design software to give the reader the impression that they can influence the process, when in fact the result will be the same no matter what they do. I'm constantly teaching juniors and technicians about the pitfalls of this on a major brand of cardiac ultrasound software that does exactly that, but so few people even realise it...