Use cases
- ETL processes: Extract, transform, and load data from various sources into a data warehouse at scheduled intervals.
- Batch processing: Perform batch processing tasks, such as aggregating data or running complex calculations during off-peak hours.
- Automated tasks: Send email reports, run billing for the month, or clean up stale data in a database.
- Content management: Publish content to various CMS platforms on a schedule.
- System maintenance: Run backups, rotate logs, and perform regular updates.
Key features
- Cron and rate schedules: Supports a wide variety of schedules including cron expressions and fixed rates (e.g. every 5 minutes).
- Massive scale: Scale to over 1 million schedules per account.
- No execution timeouts: There is no upper bound on task execution time; tasks run until they exit.
- Automatic retry: Automatically retry failed tasks until the maximum task age is reached.
- Parallelization: Run scheduled tasks in parallel. This is useful for things like sharding workloads across multiple tasks.
- Private networking: Jobs run in your private VPC and ECS cluster, and you can connect to any of the other services in your environment using Cloud Map DNS queries.
- Automated deployment: Simplify the deployment process by connecting a GitHub repository to your service. Redeploy your service automatically any time you push changes to your repository.
- No Dockerfile required: Just bring your own code. We can handle a wide variety of languages and frameworks out-of-the-box. Alternatively, bring your own Dockerfile.
- Observability: Troubleshoot issues quickly using integrated CloudWatch metrics and logs or your favorite third-party logging service (Datadog, Axiom, etc.)
How it works
FlexStack builds your code into a Docker container image and executes the image on a schedule using EventBridge Scheduler. EventBridge Scheduler scales to 1 million schedules per account and works without creating an event bus. We add ECS Fargate as an execution target and create a task definition referencing your container image in your ECS cluster.
Create a scheduled job
You can create a scheduled job with either a GitHub repository or a container image from a registry as a source. When you opt to connect a GitHub repository, your component will automatically redeploy any time you push to the branch you configure as a source. This automation pattern is known as GitOps.
Connect a GitHub repository
To connect a GitHub repository, you'll first need to install the FlexStack app to your individual GitHub account, repositories, or a GitHub organization. Once installed, you can select a repository to connect as the source of the scheduled job.
Configuration options
Name: The name of your scheduled job. This needs to be unique across all services in your current environment.
Branch: The git branch that will be used to deploy your scheduled job when you push changes to it. For example, if you want to configure a branch specific to a "staging" environment, you might create a branch named "staging" off of main/master and connect that branch to your service.
Root directory: The root path within your git repository source code. If your repository is a monorepo, you might specify a package directory here, e.g. /apps/backend.
Schedule: Run the job on a regular interval (for example, every 5 minutes) or use a cron expression.
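Because scheduling is backed by EventBridge Scheduler (see "How it works" above), schedules follow its rate and cron expression syntax. A few illustrative examples (the exact input format accepted by the FlexStack UI may differ):

```
rate(5 minutes)            # every 5 minutes
cron(0 2 * * ? *)          # every day at 02:00 UTC
cron(0 12 ? * MON-FRI *)   # weekdays at 12:00 UTC
```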
Deploy from container registry
To deploy an image from a container registry, click on the "Container image" button. At present, you will need to manually redeploy your component any time you want to update the image.
Configuration options
Name: The name of your scheduled job. This needs to be unique across all services in your current environment.
Image: The image and tag, for example redis:alpine.
Start Command: The start command that's passed to the container. By default, the CMD or ENTRYPOINT directive in your Dockerfile is used as the start command.
Schedule: Run the job on a regular interval (for example, every 5 minutes) or use a cron expression.
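To illustrate how a start command interacts with the image (a generic Docker example, not specific to FlexStack): ENTRYPOINT fixes the executable, while CMD supplies default arguments that a configured start command typically overrides.

```
ENTRYPOINT ["node"]     # fixed executable, still runs with an override
CMD ["dist/job.js"]     # default arguments, replaced by a start command
```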
Health checks
Health checks are used to ask a particular server whether it is capable of doing its job successfully. They run continuously on an interval for the entire time your server is running. When a server becomes unhealthy, it is drained and replaced, so services are self-healing. For that reason, we strongly recommend that all services enable health checks, particularly in production environments.
Configuration options
Health checks may be enabled and configured in the Deploy tab of your component.
Command: The command to run to check the health of your service.
Interval: The approximate amount of time in seconds between health checks of an individual target. The range is 5–300 seconds.
Timeout: The amount of time in seconds to wait for the health check to succeed before it is considered failed. The range is 2–120 seconds.
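A common health check command pattern is to request a health endpoint and fail on a non-success response; the port and path below are assumptions, so adapt them to your service. An exit code of 0 means healthy; any other exit code marks the check as failed.

```
curl -f http://localhost:8080/health || exit 1
```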
Configuring the task size
By default, your service will start with 0.25 vCPU, 0.5 GiB RAM, and 20 GiB of ephemeral storage. If your service is resource constrained and you're quickly maxing out the CPU or memory utilization of your containers, it is a good idea to scale them up. You can do this in the Deploy tab for your component.
Pricing will vary depending on region, CPU architecture, and whether spot instances are enabled. Up-to-date pricing data can be found here.
CPU architecture
CPU architecture defines the instruction set and memory model relied on by the operating system and hypervisor. Services can be configured to use one of two architectures: ARM or x86. You can configure this in the Deploy tab for your component.
On AWS, selecting the ARM architecture is the cost-effective option if you aren't using spot instances, as spot instances don't tend to be available for ARM architectures due to high demand.
Because x86 support is fairly ubiquitous, offers higher raw performance, and works with spot instances out-of-the-box, we default to the x86 architecture.
If your container image is multi-platform, we highly recommend selecting the "Flex" configuration option instead, which will select the best cost/performance option for your workload automatically.
If you're using FlexStack's auto-generated Dockerfiles, "Flex" is always the best option.
Task count
Configure the number of tasks to run in parallel for each scheduled job. This is useful for things like sharding workloads across multiple tasks. Defaults to 1.
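Sharding a workload across parallel tasks can be sketched as below. Note the mechanism by which a task learns its own index is an assumption here: the TASK_INDEX and TASK_COUNT environment variables are hypothetical, not a documented FlexStack feature, so check how your environment exposes this or assign indices through configuration.

```typescript
/** Return the subset of work items this shard is responsible for. */
function shardForTask<T>(items: T[], shardIndex: number, shardCount: number): T[] {
  // Deterministic round-robin partition: item i belongs to shard (i % shardCount).
  return items.filter((_, i) => i % shardCount === shardIndex);
}

// Hypothetical variables -- substitute however your platform exposes them.
const shardIndex = Number(process.env.TASK_INDEX ?? 0);
const shardCount = Number(process.env.TASK_COUNT ?? 1);

const customers = ["c1", "c2", "c3", "c4", "c5"];
const mine = shardForTask(customers, shardIndex, shardCount);
console.log(`shard ${shardIndex}/${shardCount} processing`, mine);
```

Because the partition is deterministic, every shard agrees on the assignment without any coordination between tasks.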
Build arguments
Build arguments are a way to add configurability to your builds. Pass build arguments at build-time and set a default value that the builder uses as a fallback. Read Docker's build arguments documentation here. Build arguments are not secrets. If you need to pass sensitive information to the build, use secrets and variables instead.
A powerful use case for build arguments is configuring FlexStack's autogenerated Dockerfiles.
You can configure build arguments in the Build tab of your component.
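For example, a Dockerfile can declare a build argument with a default value that the builder falls back to when none is passed (a generic Docker sketch; the argument name is illustrative):

```
# Declared before FROM so it can parameterize the base image.
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine

# Build args are baked into the build, not the runtime environment --
# never pass secrets this way.
```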
Secrets and environment variables
Store configuration and sensitive information like API keys, database passwords, and other secrets. Variables are injected into your service at runtime and during the build process. See the Secrets and Variables documentation for an in-depth look at how they work and how they're used.
These can be configured using the Secrets and variables tab in your component.
You will need to redeploy your service for changes to secrets to take effect.
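Because variables are injected at runtime, a small startup guard can make missing configuration fail fast instead of surfacing as a confusing error mid-task. A minimal sketch; the variable name in the usage comment is illustrative:

```typescript
/** Read a required environment variable, failing fast if it is missing. */
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup, e.g.:
// const databaseUrl = requireEnv("DATABASE_URL");
```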
Connect to other services
You can simply use the address http://dns.[COMPONENT_NAME].flexstack.internal:[PORT] to connect to other services in your environment. To select a specific or random healthy instance within the namespace, you can use the AWS Cloud Map DiscoverInstances API. Here is an example using TypeScript.
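The sketch below uses the AWS SDK for JavaScript v3 ServiceDiscovery client. The namespace name and the way attributes are read are assumptions based on typical Cloud Map setups (the namespace here mirrors the flexstack.internal domain); adjust them to match your environment.

```typescript
import {
  ServiceDiscoveryClient,
  DiscoverInstancesCommand,
} from "@aws-sdk/client-servicediscovery";

const client = new ServiceDiscoveryClient({});

/** Look up a random healthy instance of a service in the namespace. */
async function discoverHealthyInstance(serviceName: string) {
  const { Instances } = await client.send(
    new DiscoverInstancesCommand({
      NamespaceName: "flexstack.internal", // assumed namespace name
      ServiceName: serviceName,
      HealthStatus: "HEALTHY", // only return instances passing health checks
    })
  );
  if (!Instances || Instances.length === 0) {
    throw new Error(`No healthy instances found for ${serviceName}`);
  }
  // Pick a random healthy instance to spread load across tasks.
  const instance = Instances[Math.floor(Math.random() * Instances.length)];
  // Attributes typically include AWS_INSTANCE_IPV4 and AWS_INSTANCE_PORT.
  return instance.Attributes;
}
```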