Episodes

  • Evolution of Grammarly AI and the future of work
    May 5 2025

    As an AI writing assistant, Grammarly has used AI technology from its inception. The rise of large language models prompted the vendor to move beyond natural language processing and incorporate LLMs to help enterprise employees improve their writing as they work. That shift has led Grammarly to see a role for itself in transforming the future of work.

    Featuring: Luke Behnke, head of Enterprise Product at Grammarly, an AI-powered writing assistant platform.

    In today’s episode, we cover:

    • Grammarly’s AI evolution
    • Agentic AI and the future of work
    • AI technology as an assistant and not a replacement for work

    and more.

    To learn more about AI and Grammarly, check out SearchEnterprise AI.

    To watch video clips from our podcast, subscribe to our YouTube channel, @EyeonTech.

    References:

    • Grammarly AI and an update to the writing tool
    • What will be the future of the workplace?
    • Top 4 AI writing tools for improved business efficiency
    37 min
  • Responsible AI and the need for AI safety standards
    Apr 22 2025

    A key truth about AI is that regulation has long lagged innovation. That lag, however, does not relieve enterprises of the responsibility to deploy AI systems responsibly, or AI vendors of the responsibility to build responsible systems. What are the key metrics for assessing whether an AI system is safe?

    Featuring: Stuart Battersby, CTO at Chatterbox Labs, vendor of a quantitative AI risk metrics platform, and Danny Coleman, CEO at Chatterbox.

    In today’s episode, we cover:

    • The difference between AI safety and responsible AI
    • The need for standards in AI safety
    • The future of AI safety in enterprises

    and more.

    To learn more about responsible AI, check out SearchEnterprise AI.

    To watch video clips from our podcast, subscribe to our YouTube channel, @EyeonTech.

    References:

    • Assessing if DeepSeek is safe to use in the enterprise
    • EU, U.S. at odds on AI safety regulations
    • Responsible AI vs. ethical AI: What's the difference?
    33 min
  • Resilient AI: Siemens' journey into industrial AI and generative technologies
    Apr 8 2025

    Industrial AI is less familiar than consumer AI, but it represents a critical and growing area of AI's reach. What unique AI applications are surfacing in this sector?

    Featuring: Olympia Brikis, director of Industrial AI research at Siemens.

    In today’s episode, we’ll cover…

    • Understanding Industrial AI and its distinctions from consumer AI
    • AI and, specifically, generative AI adoption at Siemens
    • The role of digital twins in testing AI recommendations

    and more.

    To learn more about industrial AI, check out SearchEnterprise AI.

    To watch video clips from our podcast, subscribe to our YouTube channel, @EyeonTech.

    References:

    • CES 2024: Siemens eyes up immersive tech, AI to enable industrial metaverse
    • How businesses are using AI in the construction industry
    • Siemens forges digital twin deal with Nvidia for metaverse
    30 min
  • AWS developing high-performing autonomous AI agents
    Mar 25 2025

    Traditional, generative, agentic: over the past couple of decades, AI has metamorphosed into an indispensable tool for enterprises looking to streamline their processes and improve their impact. In this episode, we dive into the different types of AI, best practices for implementation, and the challenges the industry faces.

    Featuring: Deepak Singh, Vice President at AWS.

    In today’s episode, we’ll cover…

    • The difference between traditional AI, generative AI, and agentic AI
    • The role of agentic AI in software development
    • Best practices for implementing agentic AI

    and more!

    To learn more about agentic AI, check out SearchEnterprise AI.

    To watch the video version of our podcast, subscribe to our YouTube channel, @EyeonTech.

    References:

    • AWS intros new foundation model line and tools for Bedrock
    • Amazon Q, Bedrock updates make case for cloud in agentic AI
    • Amazon to spend $100B on AWS AI infrastructure
    33 min
  • How the legal profession can benefit from AI technology
    Mar 11 2025

    In the couple of years since the popularization of ChatGPT, generative AI technology has quickly taken hold in the legal profession.

    It has backfired in some cases, such as when an attorney filed a legal brief written with ChatGPT's help and the AI platform hallucinated some of the cases in the brief. That case and others have led some law firms to block general access to AI tools. Most recently, Hill Dickinson, a law firm in the U.K., asked its staff not to use generative AI tools like ChatGPT.

    Many law firms are using generative AI tools, and some even market their own AI systems. AI vendors are also partnering with law firms and companies in the legal profession. In February, LexisNexis and OpenAI agreed to integrate OpenAI's large language models across LexisNexis' products.

    The success, and uncertainty, surrounding AI tools in the legal profession led James M. Cooper and Kashyap Kompella to write the book A Short and Happy Guide for Artificial Intelligence for Lawyers. Cooper is a law professor at California Western School of Law, while Kompella is CEO of AI analyst firm RPA2AI Research.

    In the book, Cooper and Kompella explore how lawyers can understand and use AI technology.

    "We saw an urgent need to upskill lawyers on AI," Kompella said on the latest episode of Informa TechTarget's Targeting AI podcast. "How do you move AI ethics and responsible AI into practice? You have to move them through lawyers. Lawyers are a big part of that equation."

    Kompella and Cooper argue that while numerous books for lawyers about AI exist, few focus on using the technology ethically.

    The authors also argue that while the legal profession has traditionally been slow to adopt new technologies, it can benefit from AI for several reasons. For example, AI technology can provide access to legal services for those in underserved areas like rural communities in the United States, Cooper said.

    "AI can be a game changer in terms of provision of legal services," he said.

    However, providing more education is the key to helping legal professionals understand AI technology.

    "The law school curriculum is not teaching AI or any technologies to the students, so there is a huge skill gap," Kompella said.

    Cooper added, "The skill sets of prompt engineering, of knowing how to use these AI tools and the dangers that come with them, should be rote in law schools now right from the first year. Those law schools around the world that embrace this idea are future-proofing their students. They're not going to have to play catch up."

    Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

    40 min
  • Good data strategy is needed for GenAI
    Feb 25 2025

    Without a good data strategy, generative AI becomes unusable technology for enterprises.

    This was true when ChatGPT first became popular, and it is even more true years later.

    The most recent example is the Chinese AI startup DeepSeek. While most cloud AI providers, such as Google, AWS and Microsoft, now offer the DeepSeek-R1 reasoning model, many AI experts believe enterprises might be hesitant to use it because of the data it was trained on.

    Despite DeepSeek-R1's innovation, it all comes down to the foundation, said Michelle Bonat, chief AI officer at AI Squared, an AI and data integration platform.

    "As GenAI expands and expands ... the fundamentals are the fundamentals," Bonat said on the latest episode of Informa TechTarget's Targeting AI podcast.

    She added that while many organizations may have started with GenAI by simply putting up a chatbot, plenty have found that without good-quality data, they might have to pause their GenAI initiatives.

    The reason is that generative AI systems are built to produce a response; without good-quality data to draw on, they tend to hallucinate.

    Thus, Bonat said the growth in GenAI initiatives across organizations has also led to an increase in conversation around data strategy, data quality and data cleanliness.

    "They're very much connected," she said. "GenAI has become important in the conversation that connects with data strategy, data quality, data cleanliness and also, ultimately, in responsible AI and governance within the organization."

    She added that enterprises should pay attention to data and responsible AI because it benefits their businesses.

    "It's a competitive advantage to have responsible AI," she continued. "Customers want AI systems they can trust. ... Being transparent and having responsible AI helps increase your brand reputation."

    Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

    38 min
  • Multilingual LLM revolves around synthetic data
    Feb 11 2025

    While some vendors are working to make large language models better at reasoning, other AI vendors are making them fluent in multiple languages.

    Writer is a provider of a full-stack generative AI platform for enterprises.

    While the vendor provides a generative AI platform that enterprises can use to build generative AI capabilities into their workflows, it also offers a family of LLMs: Palmyra. The models support text generation and translation in numerous languages, including Spanish, French, Hindi and Russian.

    "Multilingual training data and models that can be as good in dozens of other languages as they are in English is something everybody should strive for," said Writer cofounder and CEO May Habib on a recent episode of Informa TechTarget's Targeting AI Podcast. Writer also uses large volumes of synthetic data to help build legal confidence in generative AI technology, Habib said.

    Writer also publishes data on how its models score for bias and toxicity.

    "We really want to make sure that we are compliant with folks' ESG [environmental, social and governance] guardrails and guidelines," Habib said.

    Writer recently raised $200 million in series C funding, bringing its valuation to $1.9 billion.

    Esther Shittu is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.

    33 min
  • Cisco generative AI strategy hinges on CX and agents
    Jan 28 2025

    The contact center world is a difficult place, packed with frustration and stress.

    Digital communications giant Cisco sees its mission as easing that experience for human contact center workers and the customers they deal with every day.

    For that undertaking, the vendor has seized on generative AI and agentic AI as the vehicles to both automate and augment the work of humans, in essence, smartening up the traditional chatbots that have long helped companies interact with their customers.

    "We're going to see a lot more of what I call event-based communication, proactive communication outbound that we do particularly well, powered by AI," said Jay Patel, senior vice president and general manager for customer experience at Cisco Webex, on the Targeting AI podcast from Informa TechTarget. "And then the response path to that is we think there will be AI agents involved in some of the more simple use cases.

    "For example, if you haven't paid a bill, they can obviously call you in the outbound call center, but probably a better way of doing it is probably to send you a message with a link to then basically make the payment," Patel continued.

    Like many other big tech vendors, Cisco deploys large language models (LLMs) from a variety of vendors, including OpenAI and Microsoft. It also uses open models from independent generative AI vendor Mistral, as well as AI technology developed in-house or obtained through acquisitions.

    "Fundamentally, what we are looking at is the idea of an AI engine for each use case, and within the AI engine you would have a particular LLM," Patel said.

    Among the generative AI-powered tools Cisco has assembled are Webex AI Assistant and Agent Wellness, the latter designed to tend to the psyches of busy human contact center workers.

    "Customers call very frustrated; they may shout at somebody. And then if you've had a difficult call, the agent wellness feature will mean that the supervisor knows that this set of agents has had a set of difficult calls," Patel said. "Maybe they're the ones who need a break now. So, there are ways of improving employee experience inside the contact center that we think we can … use AI for."

    Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 35 years of news experience. Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems. Together, they host the Targeting AI podcast.

    37 min