TArch
@TArch64
6 years of experience in Front-end development. Also know some shit about Back-end, DevOps, and AWS
Joined June 2019
Unpopular opinion: we're at the point where high-fidelity graphics get in the way more than they help, and in some games it would be better to spend less effort on the picture and more on interesting mechanics. Let's be honest: no matter how gorgeous the graphics are, you'll stop caring about them after 30 minutes.
Phew, I was starting to think my rule for predicting the future had stopped working, but no, it works like clockwork. The rule: "Will this make things worse for Ukrainians? If yes, the event will happen, 100%."
I can't say anything about costs yet because I need more time to collect statistics. But with the pricing Fargate charges, I could have used m7.large, the most expensive of the instance choices. The difference between m4.medium and m7.large was almost $300 per month.
A little update about my adventures with EKS Fargate. I have migrated the cluster to t3.medium, and the number of instances decreased from an estimated 24 to 18. I saved those 6 instances just by migrating to EC2 and specifying the real resource usage without Fargate restrictions.
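The "real resource usage" part is just ordinary Kubernetes requests/limits. A minimal sketch (the image name and all numbers are illustrative, not from the actual cluster):

```shell
# Sketch: on EC2 nodes you can request what the app actually uses, instead of
# the fixed CPU/memory combinations Fargate rounds each pod up to.
# All values below are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels: {app: api}
  template:
    metadata:
      labels: {app: api}
    spec:
      containers:
        - name: api
          image: example.com/api:latest
          resources:
            requests: {cpu: 150m, memory: 192Mi}
            limits:   {cpu: 500m, memory: 384Mi}
EOF
```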
To me, Fargate looks like a trap to make you spend more money on AWS. So I would rather use EC2 nodes + Cluster Autoscaler instead of Fargate. The autoscaler can request new nodes when Kubernetes needs them, and you can use different instance types depending on your task.
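A sketch of the EC2 + autoscaler route, assuming eksctl (cluster name, node group name, and sizes are placeholders):

```shell
# Sketch: an autoscaling EC2 node group instead of Fargate.
# --asg-access attaches the IAM policy the Cluster Autoscaler needs;
# the Cluster Autoscaler itself still has to be deployed into the cluster.
eksctl create nodegroup \
  --cluster my-cluster \
  --name general-workers \
  --node-type t3.medium \
  --nodes 3 --nodes-min 2 --nodes-max 12 \
  --asg-access
```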
When updating the Kubernetes version in the EKS admin panel, you have a nice button to update all your pods on EC2 nodes, but Fargate doesn't have this button, so you will need to manually restart all Deployments and StatefulSets. Luckily you can write a script to iterate through namespaces and automate it.
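That script can be sketched as a simple loop, assuming kubectl is already pointed at the cluster:

```shell
#!/usr/bin/env bash
# Sketch: after an EKS control-plane upgrade, push every Deployment and
# StatefulSet onto fresh Fargate pods by restarting them in every namespace.
set -euo pipefail

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  for kind in deployment statefulset; do
    # list names first so namespaces without this kind are skipped quietly
    for name in $(kubectl get "$kind" -n "$ns" -o jsonpath='{.items[*].metadata.name}'); do
      kubectl rollout restart "$kind/$name" -n "$ns"
    done
  done
done
```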
As I said at the start, each Fargate pod is a new node, so you will have no Docker image cache at all, because Kubernetes stores it on the node. As a result you get longer start times and more traffic to ECR, and you are billed for each byte you pull from ECR.
Also, because of the issues with certificates, kubectl run with the attach option will not work, so you need to manually create a pod, wait for the certificate, and then connect. A simple command becomes a 70-line bash script.
And all this time you can't access container logs. We have Kubernetes jobs in our deploy, but now we cannot actually see their logs, because 2 minutes is too long for them to run and time in CodePipeline costs money.
Also, when something goes wrong with Fargate it's hard to google any info. For example, since Kubernetes 1.26 Fargate has had issues with creating the certificates used to access a pod. The pod will run your code, but you need to wait 1-2 minutes for the certificate to be created and approved.
Fargate is not suitable for CPU-intensive tasks, so for things like image processing you will have to use EC2 nodes. We tried to do it on Fargate, but it had terrible performance and cost more money. 2-3 c7 instances are cheaper and deliver more performance than 20 Fargate nodes.
Luckily, if you have enough room in your limits, you can target two profiles at the same namespace and then migrate from the old profile to the new one. However, it's not the most enjoyable task.
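A rough sketch of that migration, with cluster, profile, role, and subnet names all placeholders (note that when two profiles match the same pod, EKS picks between them on its own, so the restart is what actually moves the workloads):

```shell
# Sketch: run a second profile against the same namespace, then drain the old one.
aws eks create-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name backend-v2 \
  --pod-execution-role-arn arn:aws:iam::123456789012:role/eks-fargate-pod-role \
  --subnets subnet-aaaa subnet-bbbb \
  --selectors 'namespace=backend'

# restart workloads so pods get rescheduled under the new profile
kubectl rollout restart deployment -n backend

# once nothing depends on it any more, remove the old profile
aws eks delete-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name backend-v1
```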
The only volumes you can use are EFS, and tools like Prometheus don't support it. A Fargate profile will target all pods in a namespace; if you haven't created labels to filter them out, there is no way to exclude a pod from a profile without recreating the profile.
will be lucky if you have enough space in your EC2 node groups to place those pods. Yes, you will still need an EC2 node group alongside Fargate, because not everything will run on Fargate. I don't know all the reasons for pods to fail on Fargate, but one of them is that
a new namespace to it, so as your app grows you will create new profiles until you discover that the max number of profiles is 10. In that case you need to delete an existing profile to create a new one, and deleting it will reschedule all pods created by that profile, and you
To use Fargate you need to create a profile in your EKS cluster using the CLI or the AWS console. To create a profile you specify a Kubernetes namespace and, optionally, labels to include pods. But the main issue is that profiles are immutable, and you cannot add
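For reference, creating such a profile from the CLI looks roughly like this (cluster name, role ARN, subnets, and the label are placeholders):

```shell
# Sketch: a Fargate profile matching one namespace, optionally narrowed by labels.
aws eks create-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name backend-profile \
  --pod-execution-role-arn arn:aws:iam::123456789012:role/eks-fargate-pod-role \
  --subnets subnet-aaaa subnet-bbbb \
  --selectors 'namespace=backend,labels={app=api}'
```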
That's all about resources. Generally, Fargate nodes are more expensive than EC2, so if you care about money you should avoid Fargate.