
PseudoGeek (@GeekPseudo)
Followers: 104 · Following: 956 · Media: 23 · Statuses: 1K
ideologue, obsessive learner, geek in evolution, IT, Cybersecurity, OSINT, Psyche, Web3, AI & global affairs
Joined February 2020
Thread to summarize @ProjectZeroBugs' meticulous & amazing explanation of the #NSO #ZeroClick #iMessage #RCE exploit. Edits are welcome 😇 TL;DR: the target is sent a #PDF (faked as a #GIF) containing a #JBIG2-encoded image, to carry out #chained #exploitation of the #iMessage implementation for rendering GIFs/PDFs. 1/n
Today I learned the basics of Reinforcement Learning (RL)! 🧠💡 RL lets systems learn through trial and error, receiving rewards for good actions & penalties for bad ones. Key applications include autonomous helicopters, robotic dogs, and stock trading. 🎮🚁🤖
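A minimal sketch of that trial-and-error loop, assuming a toy one-state setup (the action names, rewards, and learning rate are all made up for illustration):

```python
import random

# Toy value-learning sketch: an agent tries two actions and learns from
# made-up rewards (+1 for "good", -1 for "bad").
q = {"good": 0.0, "bad": 0.0}   # estimated value of each action
alpha = 0.1                     # learning rate (assumed)

for _ in range(1000):
    action = random.choice(list(q))            # explore: trial and error
    reward = 1.0 if action == "good" else -1.0
    # nudge the estimate toward the observed reward
    q[action] += alpha * (reward - q[action])

print(q)  # q["good"] drifts toward +1, q["bad"] toward -1
```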
Today I explored a TensorFlow-based implementation of content-based filtering using neural networks. I also solved an assignment on the same and, luckily, got 100% marks. Alhamdulillah
Today I learned about the ethical challenges in recommender systems: how profit-driven models can amplify harmful content, like conspiracy theories and exploitative businesses. Transparency & diverse perspectives are key to building systems that prioritize user well-being & societal good
Ethical Considerations: Design recommender systems to avoid unintended harm like filter bubbles and misinformation, prioritizing ethical and fair practices.
Trade-Offs: Increasing candidates improves coverage but slows down ranking; tuning the retrieval size helps balance performance and latency.
Optimization: Precompute item embeddings and compute the user embedding once for fast inference, using methods like inner product for efficient matching.
Two-Step Architecture: Use a retrieval step to generate a broad list of item candidates (100–1000) and a ranking step to accurately predict user preferences and rank those items.
Today I learnt about recommender systems and optimizations for large catalogs, including the two-step architecture involving candidate generation and ranking steps.
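A minimal sketch tying together the two-step architecture, precomputation, and trade-off tweets above, with random stand-in embeddings (`retrieve` and `rank` are my own names, not from any library):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 10_000, 32

# Item embeddings, precomputed offline by the item network (stand-ins here).
item_emb = rng.normal(size=(n_items, dim)).astype(np.float32)

def retrieve(user_emb, k=500):
    """Retrieval step: cheap inner-product scoring to get ~100-1000 candidates."""
    scores = item_emb @ user_emb              # one matrix-vector product
    return np.argpartition(-scores, k)[:k]

def rank(user_emb, candidates):
    """Ranking step: score only the candidates (stand-in for a bigger model)."""
    scores = item_emb[candidates] @ user_emb
    return candidates[np.argsort(-scores)]

user_emb = rng.normal(size=dim).astype(np.float32)  # computed once per request
top = rank(user_emb, retrieve(user_emb, k=500))     # bigger k: more coverage, more latency
print(top[:10])
```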
Rating or preference is predicted by the dot product of the user & item vectors, with a sigmoid for binary labels. Similar items are found by calculating the distance between item vectors (vₘ) after training. Precompute item similarities offline for scalability, & feature engineering is key to success.
Today I learnt how deep learning can be used for content-based filtering. Deep learning maps user & item features to vectors (vᵤ, vₘ) and predicts interactions using the dot product. Training involves optimizing both user & item networks together using squared error & regularization
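A minimal two-tower sketch of the idea in the last two tweets; the feature sizes and embedding dimension are made-up example values, and `user_nn`/`item_nn` are my own names:

```python
import tensorflow as tf

# Two towers map differently sized feature vectors x_u, x_m
# to same-size embeddings v_u, v_m (32 dims here, an assumed choice).
def tower(input_dim):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(64, activation="relu",
                              kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
        tf.keras.layers.Dense(32),   # v_u and v_m must have the same size
    ])

user_nn = tower(input_dim=17)   # x_u size: made-up example value
item_nn = tower(input_dim=40)   # x_m size: made-up example value

x_u = tf.keras.Input(shape=(17,))
x_m = tf.keras.Input(shape=(40,))
v_u, v_m = user_nn(x_u), item_nn(x_m)

# Predicted rating = dot product v_u . v_m (wrap in a sigmoid for binary labels).
y_hat = tf.reduce_sum(v_u * v_m, axis=1, keepdims=True)
model = tf.keras.Model(inputs=[x_u, x_m], outputs=y_hat)

# Both towers are optimized together with squared error; the l2 terms
# above provide the regularization.
model.compile(optimizer="adam", loss="mse")
```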
Collaborative filtering recommends items based on ratings from users with similar preferences. Content-based filtering relies on matching user features (xᵤ) and item features (xₘ): xᵤ and xₘ can differ in size, but the vectors vᵤ and vₘ computed from them must have the same size.
❌ One limitation of Collaborative Filtering is its inability to incorporate side information like demographics, location, or user preferences. Content-based filtering can address these issues and enhance recommendations. #RecommenderSystems #AI #ContentBasedFiltering
🚀 Collaborative Filtering helps recommend related items based on feature similarity (e.g., genre). It calculates the squared distance between feature vectors to find similar items. But it struggles with new items or users with limited ratings. #MachineLearning #AI
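A minimal sketch of that squared-distance lookup, with random stand-in vectors (in practice these are the per-item features learned by collaborative filtering):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))   # learned per-item feature vectors (stand-ins)

def similar_items(k, n=5):
    """Items whose feature vectors have the smallest squared distance to item k."""
    d2 = np.sum((features - features[k]) ** 2, axis=1)
    d2[k] = np.inf                       # exclude the item itself
    return np.argsort(d2)[:n]

print(similar_items(42))
```

Running this for every item offline yields the precomputed similarity table mentioned a few tweets up.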
Adam Optimizer: TensorFlow provides the Adam optimizer for efficient optimization, offering better performance and faster convergence than plain gradient descent.
Custom Cost Function: Collaborative filtering requires a custom cost function, which TensorFlow optimizes using automatic differentiation instead of standard neural network layers.
Automatic Differentiation (Auto Diff): TensorFlow simplifies gradient computation by automatically calculating derivatives of the cost function using gradient tape.
Today I learnt how #tensorflow automates complex tasks like gradient computation and optimization, enabling efficient model training without manual derivative calculations.
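A minimal sketch tying together the three tweets above (custom cost, auto diff via GradientTape, Adam); the tiny 4-movie x 3-user setup and the regularization constant are toy stand-ins:

```python
import tensorflow as tf

# Toy collaborative filtering: learn item params X and user params W, b
# for a small ratings matrix Y (0 marks a missing rating).
Y = tf.constant([[5., 0., 4.], [3., 0., 0.], [0., 4., 1.], [1., 5., 0.]])
R = tf.cast(Y > 0, tf.float32)                     # 1 where a rating exists
X = tf.Variable(tf.random.normal((4, 2)))
W = tf.Variable(tf.random.normal((3, 2)))
b = tf.Variable(tf.zeros((1, 3)))
opt = tf.keras.optimizers.Adam(learning_rate=0.1)  # Adam instead of plain GD

def cost():
    # Custom cost: squared error on observed ratings + L2 regularization.
    err = (tf.matmul(X, W, transpose_b=True) + b - Y) * R
    return 0.5 * tf.reduce_sum(err**2) + 0.05 * (tf.reduce_sum(X**2) + tf.reduce_sum(W**2))

for _ in range(200):
    with tf.GradientTape() as tape:         # records ops for auto diff
        J = cost()
    grads = tape.gradient(J, [X, W, b])     # derivatives computed automatically
    opt.apply_gradients(zip(grads, [X, W, b]))
```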
This results in better predictions, especially for new users, as it avoids predicting all ratings as zero. Mean normalization also speeds up the algorithm and improves convergence by ensuring the data has zero mean, which helps with efficient learning and regularization.
Mean normalization in recommender systems helps improve predictions for new users who haven’t rated any items by adjusting ratings to have a consistent average. It involves subtracting the average rating for each movie from all users' ratings for that movie.
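A minimal numpy sketch of the normalization step from the last two tweets; the ratings matrix is a made-up example:

```python
import numpy as np

# Rows = movies, columns = users; 0 marks a missing rating.
Y = np.array([[5., 4., 0.],
              [0., 2., 3.],
              [4., 0., 1.]])
R = (Y > 0).astype(float)          # 1 where a rating exists

# Per-movie mean, computed over rated entries only.
mu = Y.sum(axis=1) / R.sum(axis=1)

# Subtract each movie's mean from its observed ratings; unrated stay 0.
Ynorm = (Y - mu[:, None]) * R

# A brand-new user's prediction becomes 0 + mu, i.e. each movie's
# average rating, instead of 0 for everything.
print(mu)
```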