Stephen Blystone Profile
Stephen Blystone

@BlyNotes

Followers: 166 · Following: 2K · Media: 8 · Statuses: 789

Father of twin girls. BSEE and MSCS (Intelligent Systems focus). Interests: #MachineLearning, #AI, #DeepReinforcementLearning, #DataScience, #SDN, #NFV

Dallas, TX
Joined July 2017
@rharang
Rich Harang
3 years
Happy to see LLM security taken seriously by OWASP https://t.co/WQqb0coc0L Here's my (offhand) thoughts on the list: 1/
1
33
94
@ShunyuYao12
Shunyu Yao
3 years
Still use ⛓️Chain-of-Thought (CoT) for all your prompting? May be underutilizing LLM capabilities🤠 Introducing 🌲Tree-of-Thought (ToT), a framework to unleash complex & general problem solving with LLMs, through a deliberate ‘System 2’ tree search. https://t.co/V6hjbUNjbt
93
580
2K
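A minimal sketch of the tree-search idea the tweet describes: instead of a single chain of thought, candidate partial solutions are expanded and pruned deliberately. The functions propose_thoughts and score_thought are hypothetical stand-ins for LLM calls; they are not the interfaces of the actual ToT framework.

```python
# Tree-of-Thought style breadth-first search (illustrative sketch only).
# propose_thoughts() and score_thought() are placeholder heuristics;
# in a real system both would be LLM calls.

def propose_thoughts(state, k=3):
    # Generate k candidate next "thoughts" from a partial solution.
    return [state + f" -> step{i}" for i in range(k)]

def score_thought(state):
    # Rate how promising a partial solution looks.
    return len(state)  # placeholder heuristic

def tree_of_thought(root, depth=3, beam_width=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        # Keep only the most promising partial solutions (deliberate search).
        frontier = sorted(candidates, key=score_thought, reverse=True)[:beam_width]
    return max(frontier, key=score_thought)

print(tree_of_thought("problem"))
```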
@BritishNeuro
British Neuroscience Association
7 years
JUST. LOOK. AT. ALL. THESE. *FREE*. ARTICLES!! Highly readable, written by experts, looking at the last 50 years & ahead to what comes next for research on #consciousness, #memory, #hearing, #neuroprosthetics, #antipsychotics, emotions, #stemcells https://t.co/LWL6KHVo6w
3
46
77
@togelius
Julian Togelius
7 years
@ylecun @hardmaru I believe there are strong reasons not to use only gradient descent, even when it is applicable. Namely, gradient descent performs inductive learning, whereas stochastic methods can do something like hypothetico-deductivism, which is fundamentally more capable. https://t.co/krQACUK5IO
2
11
47
@radekosmulski
Radek Osmulski
7 years
This is how little code it takes to implement a siamese net using @fastdotai and @pytorch. I share this because I continue to be amazed.
12
105
511
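The tweet shows a fastai/PyTorch siamese net. Below is a plain-PyTorch sketch of the same idea, not the code from the tweet: one shared encoder is applied to both inputs and similarity is read off the distance between embeddings. The layer sizes and input dimension are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    """One shared encoder applied to both inputs; similarity = embedding distance."""
    def __init__(self, in_features=784, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)
        # Pairwise Euclidean distance between the two embeddings.
        return nn.functional.pairwise_distance(e1, e2)

net = SiameseNet()
a, b = torch.randn(4, 784), torch.randn(4, 784)
print(net(a, b).shape)  # torch.Size([4])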
@ylecun
Yann LeCun
7 years
@hardmaru While one should strive to use gradient-based optim whenever possible, there are situations where the only option is gradient-free. Gradient-based is way more efficient, but not always applicable. PS: my 1st paper on perturbative learning was at NIPS 1988
4
9
83
@hardmaru
hardmaru
7 years
This library from @ylecun’s lab implements benchmarked versions of evolutionary computation algorithms such as: Differential Evolution, Fast Genetic Algorithm, CMA-ES, Particle Swarm Optimization. 🦎🐞🐜
@ylecun
Yann LeCun
7 years
NeverGrad: gradient free optimization library. Now open source.
2
57
248
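To make the gradient-free thread above concrete, here is a minimal usage sketch along the lines of Nevergrad's documented quickstart: a derivative-free optimizer minimizing a toy objective. The objective, budget, and optimizer choice are arbitrary, and attribute names may differ across library versions.

```python
import nevergrad as ng  # gradient-free optimization library from the tweet

def square(x):
    # Toy objective with minimum at x = [0.5, 0.5]; no gradients are used.
    return sum((x - 0.5) ** 2)

# OnePlusOne is one of the registered evolutionary optimizers; CMA,
# Differential Evolution, and PSO variants are exposed the same way.
optimizer = ng.optimizers.OnePlusOne(parametrization=2, budget=200)
recommendation = optimizer.minimize(square)
print(recommendation.value)  # should land close to [0.5, 0.5]
```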
@tarantulae
Christian S. Perone
7 years
For those who missed the "Neural Ordinary Differential Equations" (https://t.co/upUhWNtpvC) presentation by David Duvenaud at NIPS, here is the video of the session: https://t.co/swdiFPr2Wu; it starts at 24:00.
0
4
14
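A minimal sketch of the neural-ODE idea from that talk: a small network parameterizes the dynamics dx/dt, and an ODE solver rolls the state forward. The paper uses adaptive solvers with the adjoint method; the crude fixed-step Euler integrator, layer sizes, and step count below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """A small network parameterizing the dynamics dx/dt = f(t, x)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))

    def forward(self, t, x):
        return self.net(x)

def euler_integrate(func, x0, t0=0.0, t1=1.0, steps=20):
    # Fixed-step Euler solve; the paper uses adaptive solvers + adjoint.
    x, dt = x0, (t1 - t0) / steps
    for i in range(steps):
        x = x + dt * func(t0 + i * dt, x)
    return x

func = ODEFunc()
x0 = torch.randn(8, 2)
print(euler_integrate(func, x0).shape)  # torch.Size([8, 2])
```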
@shivon
Shivon Zilis
7 years
Cards Against Machine Learning
34
902
4K
@DynamicWebPaige
👩‍💻 Paige Bailey
7 years
✨🧠Friendly periodic reminder that @TensorFlow supports traditional models like random forest, linear and logistic regression, k-means clustering, and gradient-boosted trees! The syntax is a bit more verbose than scikit-learn; but you get the added bonus of GPU acceleration. 👍
9
80
364
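A minimal sketch of what that looks like with the TF 1.x-era Estimator API the tweet refers to, using tf.estimator.LinearClassifier on toy data. The data, feature column, and batch/step counts are assumptions, and the Estimator API has since been deprecated in newer TensorFlow releases.

```python
import numpy as np
import tensorflow as tf  # assumes the TF 1.x-era Estimator API

# Toy data: 100 examples with 4 features and a binary label.
features = np.random.rand(100, 4).astype(np.float32)
labels = np.random.randint(0, 2, size=100)

feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
classifier = tf.estimator.LinearClassifier(feature_columns=feature_columns)

def input_fn():
    # Feed the Estimator from an in-memory dataset.
    ds = tf.data.Dataset.from_tensor_slices(({"x": features}, labels))
    return ds.shuffle(100).batch(16).repeat()

classifier.train(input_fn=input_fn, steps=100)
```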
@seb_ruder
Sebastian Ruder
7 years
10 Exciting Ideas of 2018 in NLP: A collection of 10 ideas that I found exciting and impactful this year—and that we'll likely see more of in the future. https://t.co/iv29bxYbq4
13
540
1K
@sopharicks
Sophia
7 years
The code of the final project I worked on within the @OpenAI scholarship program is on Github. In the GridWorld environment an agent navigates according to commands to hit/avoid target cells. https://t.co/jj1m29bfio #MachineLearning #ArtificialIntelligence #AI #technology
github.com
Contribute to SophiaAr/OpenAI-final-project development by creating an account on GitHub.
2
20
93
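A toy sketch of the kind of GridWorld environment the tweet describes: an agent moves on a grid and is rewarded for reaching a target cell and penalized for entering a forbidden one. The layout, rewards, and API below are assumptions for illustration, not the code in the linked repository.

```python
import random

class GridWorld:
    """Toy grid: the agent is scored for hitting the target cell and avoiding the bad cell."""
    def __init__(self, size=5, hit=(4, 4), avoid=(2, 2)):
        self.size, self.hit, self.avoid = size, hit, avoid
        self.pos = (0, 0)

    def step(self, action):  # action in {"up", "down", "left", "right"}
        dx, dy = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}[action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos = (x, y)
        if self.pos == self.hit:
            return 1.0, True    # reward, episode done
        if self.pos == self.avoid:
            return -1.0, True
        return 0.0, False

env, total, done = GridWorld(), 0.0, False
for _ in range(500):  # cap episode length
    reward, done = env.step(random.choice(["up", "down", "left", "right"]))
    total += reward
    if done:
        break
print("return:", total)
```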
@juliaferraioli
julia ferraioli
7 years
A lot of what's considered "elective" in CS education should actually be required. HCI is a lot more relevant to most software engineers these days than something like compiler design.
@EvanMPeck
Evan Peck
7 years
Every time I prep my HCI course, I relive my frustration that HCI is relegated to elective status in (most) CS curriculum. At some point, don't we need to decide that it's actually important to teach processes to find problems and design solutions that meet real human needs?
17
24
187
@kareem_carr
Dr Kareem Carr
7 years
When statisticians hear, "Successful people start their day at 4 a.m.", they think: 1. Waking early makes you successful? 2. Something about success makes it hard to sleep at night? 3. Success is lethal; only early risers survive? 4. You did your survey at 4 a.m. #epitwitter
137
2K
8K
@DynamicWebPaige
👩‍💻 Paige Bailey
11 years
Still the best thing that's ever been made for me. :) ♥, @letstechno! http://t.co/BCV2JfdVP5
5
24
122
@tirthajyotiS
Dr. Tirthajyoti Sarkar (data, AI, free thought)
7 years
0
2
4
@twiecki
Thomas Wiecki
7 years
#PyMC3 3.6 RC1 is released: https://t.co/9UPBM0o86v Please help us test it and report issues on our tracker, this is not production-ready yet. 3.6 will be the last release to support Python 2.
github.com
Update version to 3.6.rc1.
1
15
43
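A minimal PyMC3 model of the kind one might run to smoke-test a release candidate like the one announced: infer the mean of noisy data. The data, priors, and sampler settings are arbitrary assumptions, and defaults may differ between PyMC3 versions.

```python
import numpy as np
import pymc3 as pm

# Tiny smoke-test model: recover the mean of normally distributed data.
data = np.random.normal(loc=2.0, scale=1.0, size=100)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sd=10.0)
    pm.Normal("obs", mu=mu, sd=1.0, observed=data)
    trace = pm.sample(1000, tune=1000)

print(trace["mu"].mean())  # should be close to 2.0
```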
@Miles_Brundage
Miles Brundage
7 years
"Continual Match Based Training in Pommerman: Technical Report," Peng and Pang et al.: https://t.co/rxBcN4rx1Z Winner of Pommerman competition
2
11
31
@evolvingstuff
evolvingstuff
7 years
Really cool design of a spiking NN that appears to actually outperform LSTM and GRU! Deep Networks Incorporating Spiking Neural Dynamics https://t.co/VeJBnkgPcB
0
31
85
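For readers unfamiliar with "spiking dynamics", here is a generic leaky integrate-and-fire neuron simulation in NumPy-free Python. It only illustrates what a spiking unit does (leak toward rest, integrate input, spike and reset at a threshold); it is not the architecture from the linked paper, and all constants are assumptions.

```python
# Generic leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates a constant input current, and emits a spike when it
# crosses the threshold, after which it resets. Constants are illustrative.
dt, tau, v_rest, v_thresh, v_reset = 1.0, 20.0, 0.0, 1.0, 0.0
steps, current = 200, 0.06

v, spikes = v_rest, []
for t in range(steps):
    v += dt / tau * (v_rest - v) + current
    if v >= v_thresh:
        spikes.append(t)
        v = v_reset

print(f"{len(spikes)} spikes, first few at steps {spikes[:5]}")
```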