Thyaga Vasudevan
@thyaga12
Followers: 205 · Following: 583 · Media: 28 · Statuses: 752
Vice President of Products @ Skyhigh Security | Cloud Security Expert | Investor | Tech Thought Leader | Industry Speaker 🚀
San Francisco Bay Area
Joined September 2008
Very excited to be on the panel with @jamesmaguire discussing the challenges, issues, and best practices in protecting and securing #AI services - a very relevant topic for today's organizations.
🚀 Ready to dive into the fascinating world of governing #GenAI? Join the convo on Tues, Jan. 16 at 11am PT! @JamesMaguire from @eWeekNews and #TeamSkyhighSecurity's @thyaga12 will cover the challenges, issues, and best practices for mastering #AI! https://t.co/aLIId406AT
Every analyst - Cantor Fitzgerald, Baird, JPMorgan, JMP Securities, Canaccord, BMO, Capital One, Jefferies, Bernstein, and Evercore - RAISED their price target on $NOW. Gap fill at $975.
Thanks @JamesMaguire for having me on the panel for the very important topic of Gen AI regulations and data security - totally enjoyed it #eweekchat
Thanks all! This has been another excellent #eWEEKchat. Serious insight today on gen AI -- huge complex topic. Great to see this monthly gathering. Stay tuned for next month’s chat!
A10. (2/2) Collaboration between industry, Govt. and other stakeholders is crucial for effective & adaptive AI regulation that addresses ethical, legal, and societal concerns. Expect ongoing developments in this dynamic landscape as technology continues to advance. #eweekchat
A10. (1/2). The future of AI regulation will likely involve a combination of industry self-regulation and government intervention. Companies will increasingly adopt ethical AI practices, but governments will play a key role in establishing legal frameworks. #eweekchat
Q10. Final question: Overall, what’s your sense of the future of AI regulation, either at the company or higher levels? #eWEEKchat
I have faith in the security tools - let's leave it at that @AndiMann #eweekchat
A9. (2/2). While progress is being made, the effectiveness of regulation will depend on the willingness of nations to cooperate, establish common frameworks, and adapt regulations to the evolving AI landscape. #eweekchat
A9. (1/2). Achieving effective AI regulation at the national and international levels is challenging but crucial. International collaboration is needed to address the global nature of AI and ensure consistent standards. #eweekchat
Q9. Big question for humankind: Do you believe that AI will be effectively regulated at the national / international level? #eWEEKchat
A8. (2/2). Industry self-regulation can set ethical standards, but governments will play a crucial role in establishing legal frameworks, addressing societal concerns, and enforcing compliance. A collaborative approach between the two is needed. #eweekchat
A8. Long answer (1/2) - The future of AI regulation will likely be a combination of self-regulation within the industry and government intervention. #eweekchat
Q8. Will the AI sector regulate itself? Or will AI regulation be government vs. industry conflict in the years ahead? #eWEEKchat
A7. Ensure vendors adhere to your ethical guidelines, transparency, and security standards. Clearly communicate regulatory expectations, conduct regular audits, and select vendors committed to responsible AI practices. #eweekchat
Q7. Strategies for working with vendors? How should a focus on regulating AI inform a company's dealings with vendors? #eWEEKchat
Good point @andimann - but what we are seeing is that sometimes the CTO does not want to go into the governance, compliance, and security of AI, leaving that to the CISO or CAIO #eweekchat
A6. Assign a Chief AI Officer (CAIO), a Chief Data Officer (CDO), or a CISO - between them, they should oversee AI data security, regulation, ethics, and compliance. Sometimes hiring an executive with expertise in AI governance may be necessary. #eweekchat
Q6. Which executives should be responsible for regulating enterprise AI? Does the task require a new hire? #eWEEKchat
Agree with this philosophy - @sai_buddha - if all else fails, there are tools like #skyhighsecurity to help :-) #eweekchat
A4. Employees must understand that AI is a useful tool to improve productivity, but they are still responsible for their work product and for data security. They should use company-approved instances vs. public ones. #eWEEKChat
A5. (2/2) Provide regular training to staff on recognizing and mitigating security threats associated with AI. Foster a security-conscious culture, including practices on AI Data Security and vigilant behavior to reduce the risk of breaches. #eweekchat
A5 (1/2). Educate staff on ethical AI principles, potential biases, and responsible use. Foster a culture of vigilance, transparency, and accountability. Continuous training ensures staff remains adept at regulating AI in line with evolving ethical standards. #eweekchat
Q5. Also regarding staff: what role can AI training play? Thoughts on strategy for training staff to better regulate company AI? #eWEEKchat
A4. Enforce responsible use policies: Educate employees on ethical AI practices, prohibit malicious intent, and ensure compliance with guidelines. Foster a culture of responsible AI use and continuous training to mitigate risks. #eweekchat
Q4. Drilling down: What about company staff and AI regulation: what rule(s) should guide employee use of generative AI? #eWEEKchat
A3. Implement robust cybersecurity measures: Regularly update AI systems, conduct vulnerability assessments, employ encryption, and establish secure access controls to mitigate potential security risks associated with AI deployment. #eweekchat
Data misuse and leakage from Generative AI services continue to be a huge problem in organizations - could not agree with you more. #eweekchat @AndiMann
A2. Unregulated generative AI can lead to misuse, ethical problems (bias, harmful content), legal issues, reputation damage, and security risks. Companies may face consequences for malicious use, legal violations, and harm to their image. #eweekchat
Q2. What problems arise when companies don’t properly regulate their generative AI tools or other AI instances? #eWEEKchat
A1. Regulating generative AI is challenging due to rapid technological advancements, a lack of understanding among policymakers, ethical concerns (deepfakes, bias), security concerns, and the need to balance innovation with responsible use. #eweekchat
Q1. What are the challenges companies face with regulating generative AI? Why is it difficult? #eweekchat