OpenAI releases a new AI model it claims can reason like a human: Hashtag Trending for Friday the 13th of September, 2024


OpenAI releases a new AI model it claims can do complex reasoning,
Canada Emerges as Leader in Ethical AI Adoption for Accounting
Meta Confirms Public Facebook and Instagram Data Used to Train AI Models Since 2007
And what do you do if you pay the ransom, but don’t get your files back?

All this and more on the “Reason to believe” edition of Hashtag Trending. I’m your host, Jim Love. Let’s get into it.

It appears that Jimmy Apples, who has become famous for leaking what is happening at OpenAI, has been right again.

As he predicted, OpenAI has just released a groundbreaking new AI model called o1, previously code-named Strawberry. This model represents a significant leap forward in AI capabilities, particularly in complex reasoning tasks.

What sets o1 apart is its ability to evaluate its steps before proceeding, much like a human would. OpenAI says this approach allows the model to “spend more time thinking through problems before they respond,” leading to more accurate and thoughtful outputs.

In testing, o1 has shown remarkable prowess in challenging fields. It performs on par with PhD students in physics, chemistry, and biology benchmarks. Even more impressively, it scored 83% on a qualifying exam for the International Mathematics Olympiad, compared to just 13% for its predecessor, GPT-4o.

The new model isn’t just about raw performance, though. OpenAI claims o1 is more explainable and adheres more closely to safety guidelines. In fact, it scored significantly higher on tests designed to measure resistance to “jailbreaking” attempts.

However, o1 does come with some limitations. It’s currently text-only, can take longer to answer queries, and lacks the ability to browse the web or reason against specific documents. OpenAI is also implementing strict usage limits, with ChatGPT Plus users initially restricted to 30 messages per week.

The release of o1 introduces a new naming convention for OpenAI’s models, resetting the counter to 1. It will coexist with current models like GPT-4o in ChatGPT, rather than replacing them outright.

While o1 represents a significant advancement, OpenAI isn’t resting on its laurels. The company has confirmed it’s working on an even more powerful model in the GPT series, though no release date has been set.

As AI continues to evolve at a rapid pace, o1 showcases the industry’s push towards more thoughtful, capable, and potentially safer AI systems.

We’ll be checking out this new model and bringing you more information in the coming weeks.

Sources include: OpenAI blog, Axios

According to a Forrester Consulting study commissioned by Sage, Canada is positioning itself at the forefront of ethical AI adoption, at least in the accounting sector. The report, titled “Accounting 2030: Forecasting the Next Frontier in AI-Powered Transformation,” has a key finding: 76% of Canadian firms are engaged in regular ethics training, and 69% have established formal AI ethics policies.

This commitment to ethical governance isn’t just about compliance; it’s driving real business results in this sector. Canadian businesses are seeing significant improvements in forecasting and planning accuracy, with 31% reporting major enhancements due to AI integration. This strategic focus on leveraging AI for decision-making is paying off, particularly in areas like anomaly detection and streamlining monthly close processes.

According to the report, the impact on hiring is perhaps counter-intuitive: Canada saw the most pronounced increase in hiring among all countries surveyed. 24% of firms report a significant uptick in recruitment, suggesting a proactive approach to integrating AI into more strategic roles.

However, it’s not all smooth sailing. The study highlights potential vulnerabilities in data security, with a surprising 21% of Canadian firms reporting no specific measures to manage AI-related security and privacy risks. This presents a clear area for improvement as Canada continues to scale its AI initiatives.

Looking ahead, Canadian financial leaders are optimistic about AI’s potential. 53% believe AI will allow for the complete elimination of monthly closes, and 63% affirm that AI will significantly streamline operations by 2030.

Moreover, 40% predict that real-time data will become the primary basis for major financial decisions, signaling a shift towards more immediate, AI-driven decision processes.

So for Canada, addressing those data security gaps will be crucial. But if it can close them while maintaining its strong emphasis on ethical practices, Canada may be well-positioned to remain a global leader in the responsible and effective use of AI in accounting.

Sources include: Forrester Consulting, Sage

Meta has admitted that all publicly shared posts and photos from Facebook and Instagram, dating back to 2007, have been used to train its AI models. During a government inquiry in Australia, Meta’s global privacy director, Melinda Claybaugh, confirmed that unless users actively set their posts to private, the data has been scraped for AI training purposes.

Senator David Shoebridge pushed for clarification, asking if all public data since 2007 had been collected, to which Claybaugh responded, “Correct.” Meta’s privacy policies note that public posts are used for AI training, but there is no option to opt out unless users set their posts to private—except for those in the EU, where stricter regulations allow users to opt out of AI training.

Meta has been vague about when it started scraping data or how long the process has been going on, raising concerns over the use of posts from users who were minors at the time. The company has assured that it doesn’t scrape data from current minors but didn’t clarify whether adult accounts created by minors are included. While European users can opt out, and Brazil has recently banned Meta from using personal data for AI training, most users globally have no such options.

Sources include: The Verge  and ABC News

For tech leaders, facing a ransomware attack can be a nightmare—your systems are locked, your data is stolen, and you’re staring down a ransom demand. But what happens when you pay up and the decryptor doesn’t work? That’s what some execs hit by the Hazard ransomware recently found out.

According to a report by The Register, after one company paid the ransom, the decryptor they received failed to unlock their files. After multiple attempts, including contacting the criminals’ so-called “technical support,” they were left with a broken decryptor and no further communication from the attackers.

Mark Lance, a ransomware negotiator at GuidePoint Security, emphasized how stressful this situation is, stating that paying the ransom doesn’t always lead to data recovery. In this case, a third-party firm had to step in and patch the decryptor, ultimately brute-forcing their way to unlocking the files.

This incident underscores an important point: paying the ransom doesn’t guarantee success. Even when ransomware groups claim high decryption success rates, you’re still dealing with criminals, and trust is never assured.

Sources include: The Register

Just a note that on our weekend edition, we’re going to run an interview that was a hit with our Cyber Security Today audience, dealing with how to handle the impact of a breach.

And that’s our show for today. You can find show notes at our news site, technewsday.com or .ca, take your pick.

Thanks for listening. I’m your host Jim Love, have a Fabulous Friday.
