Also on the generative AI front, Microsoft announced this week that its Project Bonsai reinforcement learning platform will be supported by d-Matrix’s DIMC technology, with the aim of speeding up AI inference. For context, generative AI depends on transformer models, which are resource-intensive to run. Inference is the stage at which a trained model produces predictions or other outputs, so accelerating it should make generative AI models more efficient and easier to deploy.

Nvidia also made strides this week, announcing advancements that extend its Omniverse into scientific applications running on top of high-performance computing systems. The company said this will allow digital twins to bring together data that currently sits siloed across various apps, models and user experiences. Dion Harris, Nvidia’s lead product manager of accelerated computing, said it’s a step toward evolving digital twins from passively modeling the world to actively shaping it.

Meanwhile, Intel’s news this week focused on shaping the world in a different way: detecting deepfakes. The company introduced a new tool dubbed FakeCatcher, which it claims has a 96% accuracy rate and works by analyzing the subtle “blood flow” in video pixels to return results in real time.

Unsurprisingly, the rise of technologies like deepfakes and new strides in AI raises the need for stronger security across sectors. In a VentureBeat special report on zero-trust security released this week, our writers highlight how security is being put to the test and why a zero-trust approach is the future. The in-depth look also examines the ways some enterprises are getting zero trust wrong, including failing to understand what zero trust is at its core and how to properly apply it.

Here’s more from our top 5 tech stories of the week:

  1. New DALL-E integration adds generative AI for next-level slides

    Tome announced interactive slide options supported by OpenAI’s DALL-E technology. The company, which calls itself the “new storytelling format for work and important ideas,” says it was a natural fit to add a generative AI dimension to decks.

    “Making that a part of the storytelling creation experience just felt really natural,” Tome CEO Keith Peiris told VentureBeat. “It felt so much more powerful than looking for a stock photo or clip art — it’s kind of giving us a first look at what generative storytelling can look like.”


  2. Nvidia Omniverse to support scientific digital twins

    Nvidia has announced several significant advances and partnerships to extend the Omniverse into scientific applications on top of high-performance computing (HPC) systems.

    This will support scientific digital twins that bring together data currently siloed across different apps, models, instruments and user experiences.


  3. Why enterprises are getting zero trust wrong

    The reality of zero-trust adoption is that it’s a journey and not a destination. There is no quick fix for implementing zero trust because it’s a security methodology designed to be continuously applied throughout the environment to control user access.

    One of the most significant reasons enterprises are getting zero trust wrong is that success requires not just understanding what zero trust is, but also knowing how to apply it and which products can implement it.


  4. New Microsoft partnership accelerates generative AI development

    Microsoft and d-Matrix announced that Microsoft’s Project Bonsai reinforcement learning platform will be supported on d-Matrix’s DIMC technology, which the two vendors hope will provide a significant acceleration for AI inference.

    “Project Bonsai is a platform which enables our version of deep reinforcement learning and we call it machine teaching,” Kingsuk Maitra, principal applied AI engineer at Microsoft, told VentureBeat.

  5. Intel unveils real-time deepfake detector, claims 96% accuracy rate

    On Monday, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes — that is, synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.

    Intel claims the product has a 96% accuracy rate and works by analyzing the subtle “blood flow” in video pixels to return results in milliseconds (the general idea is sketched below).
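To make the “blood flow” idea more concrete: it resembles remote photoplethysmography (rPPG), in which a real face shows a faint, pulse-driven color variation over time that naive synthetic video often lacks. The following Python sketch is purely illustrative and is not Intel’s FakeCatcher implementation; the function names, frequency band and threshold are assumptions chosen for clarity.

```python
# Illustrative sketch only: NOT Intel's FakeCatcher code.
# It demonstrates the general rPPG-style idea of looking for a faint,
# pulse-driven periodic signal in the color of a cropped face region.
import numpy as np


def green_channel_signal(face_frames: np.ndarray) -> np.ndarray:
    """Average the green channel of a cropped face region in each frame.

    face_frames: array of shape (num_frames, height, width, 3) in RGB.
    Returns a zero-mean 1-D temporal signal, one value per frame.
    """
    signal = face_frames[:, :, :, 1].mean(axis=(1, 2)).astype(np.float64)
    return signal - signal.mean()


def looks_like_real_pulse(signal: np.ndarray, fps: float) -> bool:
    """Return True if the signal's energy is concentrated in the human
    heart-rate band (~0.7-4.0 Hz, i.e. roughly 42-240 beats per minute)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= 0.7) & (freqs <= 4.0)
    if not in_band.any() or spectrum[1:].max() == 0:
        return False
    # Hypothetical decision rule: the strongest pulse-band peak must
    # dominate the rest of the (non-DC) spectrum.
    return spectrum[in_band].max() > 0.5 * spectrum[1:].max()
```

A real detector would add face tracking, noise filtering and a trained classifier on top of signals like this; the point here is only to show why subtle physiological cues in video pixels can help separate genuine footage from synthetic media.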