While adding support for qcow2 images in #69 I broke support for local
images. Building a shell pipeline through string concatenation is not a
good idea...
The specific issue is fixed. I also moved the shell-pipeline construction into a
separate function and added unit tests for all cases, so similar issues should
be easier to spot in the future.
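The safer approach can be sketched roughly like this. Note that `buildPipeline` is a hypothetical helper, not the project's actual function: each pipeline stage is kept as an argument list and every argument is quoted individually, so no string is ever spliced into the command line raw.

```go
package main

import (
	"fmt"
	"strings"
)

// buildPipeline assembles a shell pipeline from argument slices instead of
// concatenating raw strings. Each argument is wrapped in POSIX single quotes,
// with embedded single quotes escaped as '\''.
func buildPipeline(stages ...[]string) string {
	parts := make([]string, 0, len(stages))
	for _, stage := range stages {
		quoted := make([]string, 0, len(stage))
		for _, arg := range stage {
			quoted = append(quoted, "'"+strings.ReplaceAll(arg, "'", `'\''`)+"'")
		}
		parts = append(parts, strings.Join(quoted, " "))
	}
	return strings.Join(parts, " | ")
}

func main() {
	fmt.Println(buildPipeline(
		[]string{"xz", "-d"},
		[]string{"dd", "of=/dev/sda", "bs=4M"},
	))
}
```

Because quoting happens per argument, an input containing spaces, quotes, or pipe characters cannot change the structure of the pipeline.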
Closes #97
In #68 I reduced the general limits for the backoff, expecting faster retries
to speed up the upload on average. But because it was retrying faster, the 10
available retries were used up before SSH became available.
The new limit of 100 retries matches the 3 minutes of total timeout that the
previous solution had, and should resolve the issue.
In addition, I discovered that my implementation in
`hcloudimages/backoff.ExponentialBackoffWithLimit` has a bug: the calculated
offset could overflow before the limit was applied, resulting in negative
durations. I did not fix the issue, because `hcloud-go` natively provides such
a method nowadays. Instead, I marked the method as deprecated; it will be
removed in a later release.
It is now possible to upload qcow2 images. These images will be
converted to raw disk images on the cloud server.
In the CLI you can use the new `--format=qcow2` flag to upload qcow2
images. In the library you can set `UploadOptions.ImageFormat` to
`FormatQCOW2`.
Because of the underlying process, qcow2 images need to be written to a file
first, which limits their size to 960 MB at the moment. The CLI automatically
checks the file size (if possible) and shows a warning if this limit would be
exceeded. In the library, you can pass the file size along with the input, and
a warning is logged if the limit would be exceeded.
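The check could look roughly like this. This is a hypothetical sketch: the constant name, the exact wiring, and whether the project counts the limit in MB or MiB are all assumptions.

```go
package main

import (
	"fmt"
	"os"
)

// maxQCOW2Size mirrors the 960 MB limit described above; the exact constant
// in the project may differ.
const maxQCOW2Size = 960 * 1024 * 1024

// checkQCOW2Size returns a warning string when the image would exceed the
// limit, and an empty string otherwise.
func checkQCOW2Size(size int64) string {
	if size > maxQCOW2Size {
		return fmt.Sprintf("qcow2 image is %d bytes, exceeding the %d byte limit", size, maxQCOW2Size)
	}
	return ""
}

func main() {
	// Stat the local image if possible; a remote URL has no size to check.
	if info, err := os.Stat("image.qcow2"); err == nil {
		if warning := checkQCOW2Size(info.Size()); warning != "" {
			fmt.Println("WARNING:", warning)
		}
	}
}
```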
Closes #44
- **CLI**: New flag `--server-type` that overrides the `--architecture`
flag and allows users to specify the server type they want
- **Lib**: New field in `UploadOptions`: `ServerType *hcloud.ServerType`
that overrides the `Architecture` field and allows users to specify the
server type they want
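The override behaviour can be sketched as follows. The `ServerType` struct, the `resolveServerType` helper, and the default server type names are all hypothetical, not the actual implementation: an explicitly requested server type always wins, otherwise a default is picked for the requested architecture.

```go
package main

import "fmt"

// ServerType is a simplified stand-in for hcloud.ServerType.
type ServerType struct {
	Name         string
	Architecture string
}

// resolveServerType returns the explicit server type when one was given,
// and otherwise falls back to a per-architecture default.
func resolveServerType(explicit *ServerType, architecture string) ServerType {
	if explicit != nil {
		return *explicit
	}
	defaults := map[string]ServerType{
		"x86": {Name: "cx22", Architecture: "x86"},
		"arm": {Name: "cax11", Architecture: "arm"},
	}
	return defaults[architecture]
}

func main() {
	fmt.Println(resolveServerType(nil, "arm").Name)
	fmt.Println(resolveServerType(&ServerType{Name: "ccx13", Architecture: "x86"}, "arm").Name)
}
```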
Closes #30