Security of code generated by GitHub Copilot and how to fix it with Glog.ai

Recent research found that roughly 40% of the code produced by GitHub Copilot in security-relevant scenarios was vulnerable. Security is the focus of the new scholarly paper titled “An Empirical Cybersecurity Evaluation of GitHub Copilot’s Code Contributions.”

The paper joins another, titled “Evaluating Large Language Models Trained on Code,” which studied security alongside legal and other implications.

With our Glog project, our goal is to make software more secure. The Glog project focuses on research and development of a solution that can give context-based remediation advice for security vulnerabilities in software code. The ultimate goal is auto-remediation of security vulnerabilities in software code. We are developing such a solution based on machine learning and AI. With this, agility in software security should become a reality.
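To make the idea of context-based remediation concrete, here is a minimal, hypothetical sketch of the kind of fix such a tool targets. The example is not Glog's actual output; it shows a classic SQL injection flaw (CWE-89) of the sort flagged in the Copilot study, alongside its standard remediation (a parameterized query), using Python's built-in sqlite3 module:

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # CWE-89: user input is concatenated directly into the SQL string,
    # so a crafted username can alter the query's logic
    query = "SELECT id FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_remediated(conn, username):
    # Remediation: a parameterized query; the driver handles escaping,
    # so the input is treated as data, never as SQL
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demonstration with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_vulnerable(conn, payload)))   # injection matches every row
print(len(find_user_remediated(conn, payload)))   # no user has this literal name
```

An automated remediation tool would recognize the string-formatted query as the vulnerable pattern and rewrite it into the parameterized form, which is exactly the transformation shown above.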

Hence, the Glog solution is a great tool for fixing what Copilot gets wrong in terms of security.

ResearchGate link. 
