The biggest meet-up for the developer community, Global DevSlam, is just days away, and the event's already packed calendar keeps getting richer with new speakers, sessions, and details.

Global DevSlam will bring together over 15,000 attendees from 170 countries, including the world's greatest coding talents, decision-makers, and visionaries, at the Dubai World Trade Centre from October 10-13. The event will also host PyCon MEA in collaboration with the Python Software Foundation.


The conference agenda at Global DevSlam will be packed with industry-leading discussions on Python, artificial intelligence, machine learning, blockchain, DevOps, Java, the metaverse, mobile, NFTs, gaming, quantum computing, cloud, Kubernetes, and many other disruptive technologies.

Dr. Seth Dobrin is one of the exciting names taking the stage at Global DevSlam. He will show how to implement responsible, human-centric AI to get more value from artificial intelligence. A pioneer in artificial intelligence, data science, and machine learning, Dobrin answered Dataconomy's questions ahead of the big event.

1. Can you share your perspective on responsible AI with us?

Dr. Seth Dobrin

Trust is essential to human beings. It encapsulates the aspects of humanity that define a responsible attitude, one that is inclusive and fair: without trust, relationships stall and transactions fail. Trust and responsibility are vital aspects of our online and offline lives, without which normal operations would come to a grinding halt.

As technological advances continue apace, one of the forces behind innovation is the application of artificial intelligence (AI). But AI has not had an easy ride, with issues originating from a lack of trust by design. As a result, ethical issues have blighted the image of AI, with concerns ranging from using AI to manipulate behavior to inherent racial and sex bias. As humans expand our technology repertoire to include AI-enabled systems, these systems must be fair, responsible, and inclusive. To help the industry meet this objective, the Responsible AI Institute has built a schema to score AI systems:

  • Ensure explainable and interpretable AI systems
  • Measure bias and fairness of AI systems
  • Validate systems operations for AI systems
  • Augment robustness, security, and safety of AI systems
  • Deliver accountability of AI systems
  • Enable consumer protection where AI systems are used
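The six dimensions above can be pictured as a simple scorecard. This is an illustrative sketch only: the dimension names follow the list, but the 0-1 scores and the equal-weight average are hypothetical assumptions, not the Responsible AI Institute's actual scoring schema.

```python
# Illustrative scorecard over the six responsible-AI dimensions listed above.
# Scores and the equal-weight aggregation rule are hypothetical assumptions.

DIMENSIONS = [
    "explainability_and_interpretability",
    "bias_and_fairness",
    "systems_operations",
    "robustness_security_safety",
    "accountability",
    "consumer_protection",
]

def score_system(scores: dict[str, float]) -> float:
    """Average a 0-1 score per dimension; every dimension must be rated."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unrated dimensions: {missing}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

example = {d: 0.8 for d in DIMENSIONS}
print(round(score_system(example), 2))  # → 0.8
```

Requiring a rating on every dimension mirrors the intent of the schema: a system cannot earn a high overall score by simply skipping the dimensions it performs poorly on.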

2. What does adopting responsible AI approaches promise for enterprises?

In short, better business value. This is backed by research from PwC and MIT Sloan/BCG, both of which demonstrated that businesses that prioritize the responsible implementation of AI systems yield better business value, including better products and services, enhanced brand reputation, and accelerated innovation.

3. To achieve these benefits, what should enterprises consider in implementing responsible and human-centric AI?

Step 1: Get a baseline – perform a responsible AI organizational maturity assessment.

Step 2: Set clear AI policies, strong AI governance, and appropriate controls – policies, governance, and controls drive innovation as they set clear lanes for your internal teams and expectations for the humans your business interacts with.

Step 3: Maintain an inventory of automated systems – AI is built, acquired, and used widely across most enterprises, but there is no central management of these assets. This inventory needs to be built and maintained and should be part of the funding and procurement processes.
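A minimal inventory record for Step 3 could be sketched as follows. The field names and example systems are hypothetical assumptions; the article prescribes maintaining such an inventory but not its shape.

```python
# Minimal sketch of an AI-system inventory record. Field names and the two
# example entries are hypothetical assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str
    source: str            # "built" or "acquired"
    impacts_humans: bool   # flags systems touching health, wealth, or livelihood
    tags: list = field(default_factory=list)

inventory = [
    AISystemRecord("loan-scoring", "risk-team", "built", impacts_humans=True),
    AISystemRecord("doc-search", "it", "acquired", impacts_humans=False),
]

# Systems impacting humans are the priority for assessment in the next step.
to_assess = [r.name for r in inventory if r.impacts_humans]
print(to_assess)  # → ['loan-scoring']
```

Tying records like these to funding and procurement, as the step suggests, means every new build or purchase lands in the inventory automatically rather than by after-the-fact discovery.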

Step 4: Assess automated systems – especially where they impact the health, wealth, or livelihood of a human, AI and automation in general need to be understood on the six responsible AI dimensions: explainability and interpretability; bias and fairness; systems operations; robustness, security, and safety; accountability; and consumer protection.
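For one of those dimensions, bias and fairness, a common first check is a simple group metric such as the demographic parity difference: the gap in positive-prediction rates between groups. The data below is a made-up illustration, not a method the article prescribes.

```python
# Illustrative check for the bias-and-fairness dimension: demographic parity
# difference -- the gap in positive-prediction rates between groups.
# The predictions and group labels below are fabricated for the sketch.

def demographic_parity_difference(preds, groups):
    """preds: 0/1 predictions; groups: a group label per prediction."""
    counts = {}
    for p, g in zip(preds, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + p)
    rates = {g: pos / n for g, (n, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
print(gap)  # group a: 3/4 positive, group b: 1/4 positive → 0.5
```

A gap near zero suggests the two groups receive positive predictions at similar rates; what threshold counts as acceptable is a policy decision for Step 2's governance, not something the metric itself decides.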

4. What is the difference between responsible AI and trustworthy AI?

Trust is a component of responsibility. When organizations talk about trust, they are generally talking about the technical aspects of an AI system and not necessarily the organizational, process, or workflow aspects.

5. Would it be right to call the human-centered AI field a milestone toward artificial general intelligence, moving away from the vision of more autonomous, godlike forms of AI?

We have been talking about AGI for more than 50 years. The whole time, we have kept saying it was 50 years away, and the consensus is that it is still 50 years away or more. There are many things we need to achieve technically for AGI to become a reality. On top of the technical aspects, there are many process, workflow, and societal aspects that need to be considered along that journey – responsibility and the human impact are definitely core considerations.
