When I first used the tool, the `--help` output did not make it clear that
`raw` was a supported format. Then, after stumbling on a GitHub issue that
describes this format as a way to stream larger images directly to disk, I
found that explicitly specifying `--format raw` does not work and leads to a
failure relatively late in the image upload process.
This documents that, when `--format` is not specified, a default format of
`raw` is assumed.
## 1.1.0 (2025-05-10)
### Features
* smaller snapshots by zeroing disk first (#101) (fdfb284)
### Bug Fixes
* upload from local image generates broken command (#98) (420dcf9)
The base image used requires ~0.42Gi. Even if the uploaded image is
smaller, those bytes are currently not overwritten and are still part of the
stored snapshot.
By zeroing the root disk first, those unwanted bytes are removed and not
stored with the snapshot.
This has two benefits:
1. Snapshots are billed by their compressed (shown) size, so small
images are now a bit cheaper.
2. The time it takes to create a server from the snapshot scales with
the snapshot size, so smaller snapshots mean the server can start more
quickly.
This reduces the size of an example Talos x86 image from 0.42Gi before,
to 0.2Gi afterwards. An example Flatcar image was 0.47Gi before, and
still has that size with this patch.
There are two ways to zero out the disk:
- `dd if=/dev/zero of=/dev/sda` actually writes zeroes to every block on
the device. This takes around a minute.
- `blkdiscard /dev/sda` talks to the disk directly and instructs it to
discard all blocks. This only takes around 5 seconds.
Both have the same effect on the image size, but `blkdiscard` is SO MUCH
faster, so I have decided to use it.
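As a rough illustration of how the discard step might be run on the rescue
system over SSH, here is a minimal sketch; the package, function name, and
error handling are assumptions, not the tool's actual API:

```go
package zerodisk

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// discardDisk asks the device to drop all of its blocks before the image is
// written, so leftover data from the base image does not end up in the
// snapshot. Hypothetical helper, not the actual hcloud-upload-image code.
func discardDisk(client *ssh.Client, device string) error {
	session, err := client.NewSession()
	if err != nil {
		return fmt.Errorf("open ssh session: %w", err)
	}
	defer session.Close()

	// blkdiscard finishes in seconds because it only instructs the device to
	// discard its blocks, instead of writing zeroes to each one like dd would.
	if out, err := session.CombinedOutput("blkdiscard " + device); err != nil {
		return fmt.Errorf("blkdiscard %s: %w (output: %s)", device, err, out)
	}
	return nil
}
```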
Even though only small images benefit from this, zeroing is now enabled by
default, as the downside (an upload that is about 5 seconds slower) does not
justify additional flags or options to enable or disable it.
Closes #96
While adding support for qcow2 images in #69, I broke support for local
images. Building a shell pipeline through string concatenation is not a
good idea...
The specific issue was fixed, and I also moved building the shell
pipeline into a separate function and added unit tests for all cases, so
it should be easier to spot these issues in the future.
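A minimal sketch of the kind of helper this refers to, with hypothetical
names and flags rather than the project's actual function: building the
remote pipeline from explicit stages keeps every source/compression
combination easy to unit-test.

```go
package main

import (
	"fmt"
	"strings"
)

// buildUploadPipeline assembles the remote shell command that writes the
// image to the target disk. When fromStdin is true the image is streamed
// over SSH (local file); otherwise it is downloaded from url on the server.
func buildUploadPipeline(url string, fromStdin bool, decompress, device string) string {
	var parts []string

	if fromStdin {
		// Local image: the bytes arrive on stdin of the remote command.
		parts = append(parts, "cat")
	} else {
		parts = append(parts, fmt.Sprintf("wget --no-verbose -O - %q", url))
	}

	if decompress != "" {
		// e.g. "xz -d" or "bzip2 -d", depending on the image compression.
		parts = append(parts, decompress)
	}

	parts = append(parts, fmt.Sprintf("dd of=%s bs=4M", device))

	return strings.Join(parts, " | ")
}

func main() {
	fmt.Println(buildUploadPipeline("", true, "xz -d", "/dev/sda"))
	// cat | xz -d | dd of=/dev/sda bs=4M
}
```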
Closes #97
In #68 I reduced the general backoff limits, thinking that it would speed
up the upload on average because it retried faster. But because it was
retrying faster, the 10 available retries were used up before SSH became
available.
The new 100 retries match the 3 minutes of total timeout that the
previous solution had, and should fix these issues.
In addition, I discovered that my implementation in
`hcloudimages/backoff.ExponentialBackoffWithLimit` has a bug where the
calculated offset could overflow before the limit was applied, resulting
in negative durations. I did not fix the issue because `hcloud-go`
provides such a method natively nowadays. Instead, I marked the method
as deprecated, to be removed in a later release.
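For illustration, here is a minimal overflow-safe variant of such a backoff
function; this is only a sketch, not the `hcloudimages/backoff` or
`hcloud-go` implementation:

```go
package backoff

import "time"

// ExponentialWithLimit doubles the base duration per retry but never returns
// more than limit. Doubling a time.Duration repeatedly can overflow int64 and
// wrap into a negative value, so the overflow case is clamped as well.
func ExponentialWithLimit(base, limit time.Duration) func(retries int) time.Duration {
	return func(retries int) time.Duration {
		d := base
		for i := 0; i < retries; i++ {
			d *= 2
			if d > limit || d < 0 {
				// Past the limit, or wrapped around on overflow.
				return limit
			}
		}
		return d
	}
}
```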
Use `goreleaser` and `ko` to automatically build and publish container
images in the release workflow. The images are published to
`ghcr.io/apricote/hcloud-upload-image`.
Co-authored-by: Ilja Malachowski <malahovskiy.in@gmail.com>
The CLI depends on the lib, and to make sure that users who install
through `go install` get the correct version, we need to cut a release
for the lib first, bump the dependency in the CLI, and then release the CLI.