* Kinesis Streams: Writes are limited to 1K records/sec and 1 MB/sec per shard, reads to 2 MB/sec per shard. Want a different read/write ratio? Nope, not possible. Proposed solution: use more shards (sketched at the end of this comment). Does not scale automatically. There is another service, Kinesis Firehose, that does not offer read access to the streaming data at all.
* EFS: Cold start problems. If you have a small amount of data in EFS, reads and writes are throttled, since burst throughput scales with how much data you store. Ran into some serious issues due to write throttling.
* ECS: Two containers cannot use the same port on the same node. An anti-pattern for containers.
AWS services come with lots of strings attached and minimums for usage and billing. Building such services (based on fixed quotas) is much easier than building services that are billed purely pay-per-use. That complexity, plus the pressure to optimize costs, ends up demanding more human time and effort. AWS got a good lead in the cloud space, but they need to keep improving these services instead of letting them rot.
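To illustrate the Kinesis point: since shards don't grow or shrink on their own, resharding ends up being something you script yourself. A rough sketch with boto3 (the stream name and the "just double it" policy are made up):

    import boto3

    kinesis = boto3.client("kinesis")

    # Look up how many shards are currently open on a hypothetical stream.
    desc = kinesis.describe_stream_summary(StreamName="clickstream")
    open_shards = desc["StreamDescriptionSummary"]["OpenShardCount"]

    # Each shard gives roughly 1 MB/s (or 1K records/s) in and 2 MB/s out,
    # so more read or write throughput always means more shards.
    kinesis.update_shard_count(
        StreamName="clickstream",
        TargetShardCount=open_shards * 2,
        ScalingType="UNIFORM_SCALING",
    )

You still have to decide when to run this and you pay for every shard you add, which is exactly the kind of busywork a pay-per-use service shouldn't need.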
Totally agree. The way to overcome those shortcomings in AWS is to sort of put band-aids on them with more services (at least that's their suggestion). I do understand it's not feasible to provide a service that fits everyone, but it would be good if they solved the fundamental problems.
One more to add to the list.
In DynamoDB, during peak (or rush hour) you can scale up, which increases the underlying replicas (or partitions) to keep reads smooth. However, after the rush hour there is no way to drop those additional resources. Maybe someone can correct me if I am wrong.
Thanks. Not the auto scaling part. I thought that even if you scale up manually with new replicas, you can't scale down. I should read the manual and get a clear picture.
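If it helps, my understanding is that the provisioned read/write capacity itself can be lowered again with UpdateTable (there is a per-day limit on decreases), but the extra partitions created while scaled up are not merged back, so the reduced capacity ends up spread across them. A minimal sketch with boto3 (table name and numbers are made up):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Dial provisioned throughput back down after the rush hour.
    # The capacity units drop, but the partition count created while
    # scaled up stays, so each partition gets a smaller slice.
    dynamodb.update_table(
        TableName="orders",
        ProvisionedThroughput={
            "ReadCapacityUnits": 100,
            "WriteCapacityUnits": 50,
        },
    )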
> * ECS: Two containers cannot use the same port on the same node. An anti-pattern for containers.
Could you elaborate on this? I'm not sure I understand. Are you saying that two containers cannot be mapped to the same host port? That would seem normal, since you can't bind to a port where something is already listening. But I guess I must be missing something.
The OP is talking about how, when using a classic load balancer in AWS, your containers are all deployed exposing the same port, kind of like running "docker run -p 5000:5000" on each EC2 instance in your cluster. Once the port is in use, you can't deploy another copy of that container on the same EC2 node.
The solution is to use AWS's Application Load Balancer instead, which lets you dynamically allocate ports for your containers and route traffic to them as ECS services.
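Concretely, you set the host port to 0 in the task definition so ECS picks an ephemeral port on the instance for each container, and the ALB target group routes to whichever ports were chosen. A rough sketch with boto3 (the family, names, and image are made up):

    import boto3

    ecs = boto3.client("ecs")

    # hostPort 0 (in the default bridge network mode) tells ECS to pick a
    # random ephemeral port on the instance, so several copies of the same
    # container can land on one EC2 node without port conflicts.
    ecs.register_task_definition(
        family="web",
        containerDefinitions=[
            {
                "name": "web",
                "image": "myorg/web:latest",
                "memory": 256,
                "portMappings": [
                    {"containerPort": 5000, "hostPort": 0, "protocol": "tcp"},
                ],
            }
        ],
    )

The ECS service is then created with a loadBalancers entry pointing at the ALB target group, and the agent takes care of registering whatever host ports were assigned.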
I'm not familiar with the details of AWS here, but maybe the OP means mapping two different host ports to the same port on two different containers? That's the only thing I can imagine that would be a container anti-pattern in the way described.
That is perfectly possible with ECS, so I don't know what the OP was referring to. The thing I remember, though, is that you have to jump through a lot of hoops, like making four API calls (or more with pagination) for what should have been a single call, to make such a system work on ECS.
Nowadays you would often run containers with a container network (flannel, calico, etc.) that assigns a unique IP per container, which avoids conflicting port mappings regardless of how many containers with the same port run on a single host.
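The same mechanics can be shown on a single host with the Docker SDK for Python (the network and image names below are made up): when nothing publishes a host port, each container just listens on its own IP, so there is no port on the node to fight over.

    import docker

    client = docker.from_env()
    client.networks.create("demo-net", driver="bridge")

    # Both containers listen on port 5000 inside their own network namespace.
    # No host port is published, so each gets its own IP on demo-net and
    # they never conflict on the node itself.
    for name in ("web-1", "web-2"):
        client.containers.run(
            "myorg/web:latest",
            name=name,
            network="demo-net",
            detach=True,
        )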