Solved it, with help from this gist: https://gist.github.com/JamesRandall/11088079#file-blobstoragemultipartstreamprovider-cs.
Here's how I'm using it, along with a neat "hack" to get the actual file size without first copying the file into memory. Oh, and it's twice as fast (unsurprisingly).
// Create an instance of our provider.
// See https://gist.github.com/JamesRandall/11088079#file-blobstoragemultipartstreamprovider-cs for implementation.
var provider = new BlobStorageMultipartStreamProvider();
// This is where the uploading is happening, by writing to the Azure stream
// as the file stream from the request is being read, leaving almost no memory footprint.
await this.Request.Content.ReadAsMultipartAsync(provider);
// We want to know the exact size of the file, but this info is not available to us before
// we've uploaded everything - which has just happened.
// We get the stream from the content (and that stream is the same instance we wrote to).
var stream = await provider.Contents.First().ReadAsStreamAsync();
// Problem: If you try to use stream.Length, you'll get an exception, because BlobWriteStream
// does not support it.
// But this is where we get fancy.
// Position == size, because the file has just been written to it, leaving the
// position at the end of the file.
var sizeInBytes = stream.Position;
Voilà, you have the size of the uploaded file without ever copying it into your web instance's memory.
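For reference, here's roughly what such a provider looks like. This is a minimal sketch in the spirit of the linked gist, assuming the classic WindowsAzure.Storage SDK; the "StorageConnection" and "uploads" names are placeholders, not part of the original answer:

```csharp
using System.Configuration;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using Microsoft.WindowsAzure.Storage;

public class BlobStorageMultipartStreamProvider : MultipartStreamProvider
{
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        var contentDisposition = headers.ContentDisposition;
        if (contentDisposition == null || string.IsNullOrEmpty(contentDisposition.FileName))
        {
            // Plain form fields can be buffered in memory; only files go to Blob Storage.
            return new MemoryStream();
        }

        var fileName = contentDisposition.FileName.Trim('"');
        var account = CloudStorageAccount.Parse(
            ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
        var blob = container.GetBlockBlobReference(fileName);

        // OpenWrite returns a stream that pushes blocks to Azure as they are
        // written, so the request body never has to be buffered in memory.
        return blob.OpenWrite();
    }
}
```

ReadAsMultipartAsync then writes each part straight into whatever stream GetStream hands back, which is why the upload happens while the request is still being read.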
As for getting the file length *before* the file has been uploaded, that's not as easy, and I had to resort to some rather unpleasant methods to get an approximation.
In the BlobStorageMultipartStreamProvider:
var approxSize = parent.Headers.ContentLength.Value - parent.Headers.ToString().Length;
That gives me a file size that is only off by a few hundred bytes (depending on the HTTP headers, I'd guess). That's close enough for my purposes, since my quota enforcement can live with a few bytes being shaved off.
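To make it concrete, here's a sketch of where that line could sit inside the provider's GetStream override; MaxUploadBytes and the quota check are illustrative, not from the original answer:

```csharp
public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
{
    // "parent" is the whole multipart content, so Content-Length covers the
    // entire request body; subtracting the serialized header text gets us
    // within a few hundred bytes of the actual file size.
    var approxSize = parent.Headers.ContentLength.Value - parent.Headers.ToString().Length;

    if (approxSize > MaxUploadBytes) // illustrative quota limit
    {
        throw new HttpResponseException(HttpStatusCode.RequestEntityTooLarge);
    }

    // ... open and return the Azure blob write stream as before ...
}
```

The approximation still counts the multipart boundary markers and per-part headers, which is why it overshoots by a few hundred bytes.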
Just to show off, here's the memory footprint, as reported by the extremely accurate and advanced Performance tab in Task Manager.
Before - using a MemoryStream, reading into memory before uploading
After - writing directly to Blob Storage