
Principled Tech
@PrincipledTech
Followers: 2K · Following: 1K · Media: 8K · Statuses: 11K
Win in the attention economy by partnering with PT for all your marketing, learning, and testing needs.
Durham, NC
Joined August 2011
For decision support system workloads, @Azure #Databricks outperformed Databricks on AWS, completing four concurrent query streams over 9 minutes faster. This kind of performance can help you accelerate decision making: #DataDriven
Run your Llama 3 GenAI models on a @Dell AI Factory on-premises solution instead of the cloud and save up to 63% over the next four years. Get the facts: #DellAIFactory #GenAI
Time to update your servers? A @Supermicro_SMCI H14 Hyper Dual Processor server with @AMD EPYC 9474F CPUs delivered up to 3.17x the database performance of a legacy system. Consolidate and save on licensing, power, and maintenance costs: #consolidation
Host your own high-precision AI chatbots augmented with your in-house data on @Dell PowerEdge XE9680 servers powered by @AMD Instinct MI300X Accelerators running the very large Llama 3.1 405B LLM: #GenAI #PowerEdge #AMDInstinct
Did you know? Just one @DellServers #PowerEdge R7725 with @AMD #EPYC 9755 processors can handle the data analysis workloads of 14 legacy servers. Consolidating with these newer servers can boost performance while reducing costs: #dellservers
The @Dell PowerEdge XE9680 server, equipped with eight @NVIDIA H100 SXM GPUs, handles large LLMs like Llama 3.1 405B using FP8 precision for seamless, domain-specific chatbot experiences: #GenAI #PowerEdge #H100
We measured the decision support system performance of two @Databricks solutions: @Azure Databricks & Databricks on AWS. In our tests, Azure Databricks completed both single & concurrent query streams in less time. Find out more: #BigData #CSPs #analytics
We proved that a new @Supermicro_SMCI H14 Hyper DP server equipped w/ two @AMD #EPYC 9474F CPUs can replace three older #servers running OLTP workloads—potentially saving an organization up to $1.7M over five years! Get the details here: #datacenter
Build your AI chatbot infrastructure on @Dell PowerEdge XE9680 servers with @AMD Instinct MI300X Accelerators and support complex chatbot conversations using the Llama 3.1 405B LLM: #GenAI #PowerEdge #AMDInstinct
As this infographic shows, the @DellServers #PowerEdge R7725 powered by @AMD outperformed the HPE ProLiant DL380 Gen10 by 62.9% in transactions per minute per core, enabling faster database transactions and potential licensing savings: #dellservers
Learn how consolidating legacy systems onto new @Supermicro_SMCI H14 Hyper DP servers powered by @AMD EPYC 9474F CPUs can save your organization up to $1.7 million per new server over the next five years. Get the facts: #consolidation #EPYC
Consider upgrading to the @DellServers #PowerEdge R7725 for your transactional database workloads. The server with @AMD EPYC processors supported 62.9% more transactions per core than a legacy server: #dellservers
Unlock next-level AI with @Dell PowerEdge XE9680 servers featuring @AMD Instinct MI300X Accelerators. This combo supports the very large Llama 3.1 405B LLM and delivers fast, accurate chatbot responses for up to 136 simultaneous users: #GenAI #PowerEdge
Invest smartly in AI infrastructure: @Dell PowerEdge XE9680 with @NVIDIA H100 GPUs offers a 5-year total cost of ownership of around $8.3M for a 6-server rack supporting hundreds of chatbot users: #GenAI #PowerEdge #H100