28 Mar 2022

Upload a Large File in Chunks by S3 Signed URL in OpenNetwork of Revit Design Automation

By Xiaodong Liang

Disclaimer: the OpenNetwork feature of Design Automation is still in pre-release. It may be subject to change or be discontinued before the final release.

One of the typical workflows of Design Automation (DA) is a work item that outputs its result data to a Forge bucket: a personal bucket of the Forge account, a bucket of BIM 360 or Autodesk Construction Cloud (ACC), etc. When the work item completes, DA currently uploads the output automatically, either by sending the file to the bucket directly, by a signed resource, or by the new way of an S3 signed url. All are done by the Data Management API, e.g.

//send the file to the bucket directly. Note: this way will be deprecated around September 2022

"result": {
      "verb": "put",
      "url": "https://developer.api.autodesk.com/oss/v2/buckets/<bucket>/objects/<model>.rvt",
      "headers": {
             "Authorization": "Bearer <token>"
         }
    }
//upload with signed resource.  

"result": {
      "verb": "put",
      "url": "https://developer.api.autodesk.com/oss/v2/signedresources/<guid>?region=US",
      "headers": {
             "Authorization": "Bearer <token>"
         }
    }
//upload with S3 signed url. Note there is no Authorization header: the url itself is pre-signed and short-lived.

"result": {
      "verb": "put",
      "url": "https://s3.amazonaws.com/com.autodesk.oss-persistent/65/ec/c9/155f4d54487500da443cfbbd9ecf5ebf81/wip.dm.prod?response-cache-control=private%3B%20must-revalidate%3B%20no-cache&response-content-disposition=attachment%3B%20modification-date%3D%22Sun%2C%2020%20Feb%202022%205%3A05%3A53%20%2B0000%22&response-content-type=application%2Foctet-stream&X-Amz-Security-Token=......&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20220221T090914Z&X-Amz-SignedHeaders=host&X-Amz-Expires=120&X-Amz-Credential=ASIAYZTHFNX527RY7LF7%2F20220221%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=2ab16d9d78b3038206325ea5996c0dbadd089e0b932bbea86751e81445f1fe5f"
      
    }

In some cases, the output can be a large file. With the methods above, Design Automation may fail with a TIMEOUT exception, which is raised by Data Management. The best approach is to upload the large file in chunks. DA has not provided this mechanism yet, but we can implement it ourselves with OpenNetwork (see this blog for more details).

In this workflow, one trick of Revit Design Automation matters: the custom job is processed inside HandleDesignAutomationReadyEvent. Normally the plugin saves the result as an updated/new model, or exports other custom data. If it is an updated/new model, the file is still locked during HandleDesignAutomationReadyEvent because it is not opened in shared mode, which prevents us from reading the file stream when we want to split the model into chunks.

By investigation, the OnShutdown event turns out to be a suitable place: when it fires, the output model file has been unlocked, and it is available in the working directory of the work item. We can pick it up there and upload it with the Data Management API.

Since the Data Management API has newly released the S3 signed url endpoints, we can use this new way to upload the chunks. The .NET SDK has not packaged these endpoints yet (coming soon), so the demo in this blog implements the workflow with raw HTTP requests. OpenNetwork is in pre-release and requires an allow-list; if you want the complete plugin, activity, appbundle etc., please email forge.help @ autodesk.com, ccing xiaodong.liang @autodesk.com.

Note: currently the Revit API does not support multi-threading well. When asynchronous calls are mixed with Revit API calls, you may get exceptions from the Revit API complaining that the call is not made from the main thread. As a workaround for now, you can wrap the asynchronous call into a synchronous one before calling the Revit API. This is what the following line does:

s = Task.Run(() => post_run_on_shutdown(upload_model_filename)).Result;
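
The complete source of the demo plugin is below:
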
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Autodesk.Revit.ApplicationServices;
using Autodesk.Revit.DB;
using DesignAutomationFramework;
using Newtonsoft.Json;
using RestSharp;

namespace DALargeModelRevitOpenNetwork
{
    //upload status for output of work item
    public class uploadStatus
    {
        public int Status { get; set; } = 200;
        public string Message { get; set; } = "success";
        public bool IsChunks { get; set; } = false;
        public string ChunkSessionId { get; set; } = "";
    }
    //input parameters with token, bucket, and object
    internal class ForgeOSSParam
    {
        public string access_token { get; set; } = "";
        public string bucket_key { get; set; } = "";
        public string object_key { get; set; } = "";
    }
    //response of getting signed url of S3
    internal class Response_GET_S3_Signed_URL
    {
        public string uploadKey { get; set; } = "";
        public List<string> urls { get; set; }
    }
    //response of uploading file stream to S3
    internal class Response_Upload_Stream_S3
    {
        //this is actually from header
        public string eTag { get; set; } = "";
    }
    //response of completing uploading
    internal class Response_Complete_Upload
    {
        public string bucketKey { get; set; } = "";
        public string objectId { get; set; } = "";

        public string objectKey { get; set; } = "";

        public int size { get; set; }
        public string contentType { get; set; } = "";
        public string location { get; set; } = "";
    }


    [Autodesk.Revit.Attributes.Regeneration(Autodesk.Revit.Attributes.RegenerationOption.Manual)]
    [Autodesk.Revit.Attributes.Transaction(Autodesk.Revit.Attributes.TransactionMode.Manual)]
    public class DALargeModelRevitOpenNetworkApp : IExternalDBApplication
    {
        private const long UPLOAD_CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB: the minimum part size S3 accepts for a multipart upload
        private const string FORGE_BASE_URL = "https://developer.api.autodesk.com";

        // input params with Forge OSS info: token, bucket key and object key
        const string input_json = "inputParam.json";

        //the model that is saved as by Design Automation
        //will be uploaded by custom code in OpenNetWork
        const string upload_model_filename = "result.rvt";

        //the output json with uploading status
        const string output_json = "output.json";

        public ExternalDBApplicationResult OnStartup(Autodesk.Revit.ApplicationServices.ControlledApplication app)
        {
            DesignAutomationBridge.DesignAutomationReadyEvent += HandleDesignAutomationReadyEvent;
            return ExternalDBApplicationResult.Succeeded;
        }

        public ExternalDBApplicationResult OnShutdown(Autodesk.Revit.ApplicationServices.ControlledApplication app)
        {
            //in shutdown, the saveAs model has been released from Revit engine
            //start to upload the model to Forge bucket

            Console.WriteLine("starting to upload model to Forge bucket by OpenNetWork... ");

            //make an output for DA workitem with uploading status
            uploadStatus s = new uploadStatus();

            string workingFolder = Directory.GetCurrentDirectory();
            string filePath = Path.Combine(workingFolder, upload_model_filename);

            try
            {
                if (File.Exists(filePath))
                {
                    Console.WriteLine("found the output model in working directory.");
                    s = Task.Run(() => post_run_on_shutdown(upload_model_filename)).Result;
                    Console.WriteLine("ended to upload model by OpenNetWork... ");
                }
                else
                {
                    PrintAndStoreStatus(s, 404, "cannot find output model in working directory!");
                }
            }
            catch (Exception ex)
            {
                PrintAndStoreStatus(s, 500, "general exception with uploading model:" + ex.Message);
            }

            Console.WriteLine("writing status to to output.json... ");
            string jsonStr = JsonConvert.SerializeObject(s);
            System.IO.File.WriteAllText(workingFolder + "\\" + output_json, jsonStr);
            Console.WriteLine("end writing status to  to output.json... ");

            return ExternalDBApplicationResult.Succeeded;
        }


        //upload the result model, in chunks if it is large
        public static async Task<uploadStatus> post_run_on_shutdown(string file_name)
        { 

            uploadStatus r = new uploadStatus(); 

            string workingFolder = Directory.GetCurrentDirectory();

            //get the access token, bucket key and object key from the input param (inputParam.json)
            //another option is to fetch these data from the endpoints of your own server (also by OpenNetwork)
            string inputParamPath = Path.Combine(workingFolder, input_json);
            if (File.Exists(inputParamPath))
            {
                Console.WriteLine("reading input info of OSS ... ");   
                string jsonContents = File.ReadAllText(inputParamPath);
                Console.WriteLine("jsonContents ... " + jsonContents);

                ForgeOSSParam forge_oss_param = 
                        JsonConvert.DeserializeObject<ForgeOSSParam>(jsonContents);
                Console.WriteLine("forge_oss_param ... " + forge_oss_param.access_token);


                string modelFilePath = Path.Combine(workingFolder, file_name);
                long fileSize = (new FileInfo(modelFilePath)).Length;

                long chunkSize = UPLOAD_CHUNK_SIZE; // each part except the last one must be at least 5 MB

                try{
                    r.IsChunks = fileSize > UPLOAD_CHUNK_SIZE;
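                    //the S3 signed-url upload is a three-step conversation with OSS:
                    //  1. GET  .../objects/<object>/signeds3upload to obtain an uploadKey and one signed url per part
                    //  2. PUT  the raw bytes of each part directly to its signed S3 url (no Forge token is needed there)
                    //  3. POST .../objects/<object>/signeds3upload with the uploadKey (and the collected eTags) to finalize the object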
                #region "chunks"
                    if (r.IsChunks){ // upload in chunks   
                         //how many chunks (the last one may be smaller)
                        int chunksCount = (int)Math.Ceiling((double)fileSize / chunkSize);
                        Console.WriteLine("trying to get upload key.... ");
                        Response_GET_S3_Signed_URL signed_upload_contents = await getS3SignedUrl(forge_oss_param);
                        if (signed_upload_contents != null){
                            Console.WriteLine("[get upload key] succeeded ");
                            var uploadKey = signed_upload_contents.uploadKey; 
                            Console.WriteLine("trying to get signed urls of chunks.... "); 
                            signed_upload_contents = await getS3SignedUrl(forge_oss_param, chunksCount);
                            if (signed_upload_contents != null){
                                Console.WriteLine("[get signed urls of chunks] succeeded "); 
                                uploadKey = signed_upload_contents.uploadKey;
                                List<string> urls = signed_upload_contents.urls; 

                                //3. upload the chunks one by one (this could also be done in parallel)
                                //   and record the eTag of each part
                                List<string> eTags = new List<string>();
                                using (BinaryReader reader = new BinaryReader(new FileStream(modelFilePath, FileMode.Open)))
                                {
                                    for (int chunkIndex = 0; chunkIndex < chunksCount; chunkIndex++)
                                    {
                                        //the last chunk is usually smaller than chunkSize
                                        long start = (long)chunkIndex * chunkSize;
                                        long numberOfBytes = Math.Min(chunkSize, fileSize - start);
                                        byte[] fileBytes = new byte[numberOfBytes];
                                        reader.BaseStream.Seek(start, SeekOrigin.Begin);
                                        reader.Read(fileBytes, 0, (int)numberOfBytes);
                                        MemoryStream memoryStream = new MemoryStream(fileBytes);

                                        Response_Upload_Stream_S3 upload_stream_s3 =
                                            await uploadStream(urls[chunkIndex], memoryStream);

                                        if (upload_stream_s3 != null){
                                            Console.WriteLine(string.Format("[upload {0} chunk stream] succeeded ", chunkIndex));
                                            //keep the eTag of this part; all of them are needed to complete the upload
                                            eTags.Add(upload_stream_s3.eTag);
                                        }
                                        else{
                                            Console.WriteLine(string.Format("[upload {0} chunk stream] failed ", chunkIndex));
                                        }
                                    } 
                                    if (eTags.Count == urls.Count){
                                        Console.WriteLine("[upload ALL chunks stream] succeeded ");
                                        //4. tell Forge to complete the uploading
                                        Response_Complete_Upload complete_Upload = await completeUpload(uploadKey, fileSize, eTags, forge_oss_param);
                                        if (complete_Upload != null) {
                                            Console.WriteLine("completed uploading the model in chunks ");
                                        }
                                    }
                                    else{
                                        Console.WriteLine("[some chunks stream uploading] failed "); 
                                    } 
                                } 
                            }
                            else{
                                Console.WriteLine("[get signed urls of chunks] failed ");

                            }
                        }
                        else {
                            Console.WriteLine("[get upload key] failed ");
                        } 
                    } 
                #endregion
                #region "single"
                    else { // upload in a single call
                        //1. get upload key and signed url
                        Response_GET_S3_Signed_URL signed_upload_contents = await getS3SignedUrl(forge_oss_param);
                        if (signed_upload_contents != null) {
                            Console.WriteLine("get signed url succeeded with single model ");
                            var uploadKey = signed_upload_contents.uploadKey;
                            var urls = signed_upload_contents.urls; 
                            var signed_url = urls[0]; //because single part, one url only.
                            //2. upload binary to bucket by the signed url
                            //open the file as a raw binary stream (StreamReader is meant for text)
                            using (FileStream fileStream = File.OpenRead(modelFilePath)){
                                Response_Upload_Stream_S3 upload_stream_s3 = await uploadStream(signed_url, fileStream);
                                if (upload_stream_s3 != null){
                                    Console.WriteLine("uploaded binary succeeded with single model ");
                                    //3. tell Forge to complete the uploading
                                    Response_Complete_Upload complete_Upload = await completeUpload(uploadKey, fileSize,null, forge_oss_param);
                                    if (complete_Upload != null){
                                        Console.WriteLine("completed uploading single model ");
                                    } 
                                } 
                            }
                        }
                    }//single upload end
                 #endregion
                }
                catch (Exception e){
                   PrintAndStoreStatus(r, 500, "general exception: " + e.Message);
                } 
            }
            else {
                  PrintAndStoreStatus(r, 404, "cannot find input params for Forge bucket & object"); 
            }  
            return r; 
        }


        private static async Task<Response_GET_S3_Signed_URL> getS3SignedUrl(ForgeOSSParam forge_oss_param, int chunksCount = 0)
        {
            string uploadUrl = string.Format(FORGE_BASE_URL +
                                            "/oss/v2/buckets/{0}/objects/{1}/signeds3upload",
                                            forge_oss_param.bucket_key,
                                            forge_oss_param.object_key);
            IRestClient client = new RestClient(uploadUrl);
            RestRequest request = new RestRequest(Method.GET);
            request.AddHeader("Authorization",
                              string.Format("Bearer {0}",
                              forge_oss_param.access_token));
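            //without the query parameters the service returns a single signed url;
            //with firstPart/parts it returns one signed url per part in that range
            //(parts are 1-based, so firstPart=1 and parts=chunksCount covers the whole file)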
            if (chunksCount > 0)
            {
                request.AddParameter("firstPart", 1);
                request.AddParameter("parts", chunksCount);
            }
            IRestResponse response = await client.ExecuteAsync(request);

            if (response.StatusCode == System.Net.HttpStatusCode.OK)
            {
                var signed_upload_contents =
                    JsonConvert.DeserializeObject<Response_GET_S3_Signed_URL>(response.Content);

                return signed_upload_contents;
            }
            else
            {
                return null;
            }
        }

        private static async Task<Response_Upload_Stream_S3> uploadStream(string signed_url, Stream ms)
        {
            IRestClient client = new RestClient(signed_url);
            RestRequest request = new RestRequest(Method.PUT);
            request.AddParameter("application/octet-stream", toByteArray(ms), ParameterType.RequestBody);

            IRestResponse response = await client.ExecuteAsync(request);
            if (response.StatusCode == System.Net.HttpStatusCode.OK)
            {
                Response_Upload_Stream_S3 res = new Response_Upload_Stream_S3();
                res.eTag = response.Headers.ToList().Find(x => x.Name == "ETag").Value.ToString();
                //the ETag header value comes wrapped in double quotes; strip them
                res.eTag = res.eTag.Replace("\"", string.Empty);
                return res;
            }
            else { return null; }
        }
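
        //toByteArray is used by uploadStream above; a minimal implementation
        //that drains the given stream into a byte array for the request body
        private static byte[] toByteArray(Stream stream)
        {
            using (MemoryStream memoryStream = new MemoryStream())
            {
                stream.CopyTo(memoryStream);
                return memoryStream.ToArray();
            }
        }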


        private static async Task<Response_Complete_Upload>
            completeUpload(string uploadKey, long size, List<string> eTags, ForgeOSSParam forge_oss_param)
        {
            string uploadUrl = string.Format(FORGE_BASE_URL +
                               "/oss/v2/buckets/{0}/objects/{1}/signeds3upload",
                               forge_oss_param.bucket_key, forge_oss_param.object_key);

            IRestClient client = new RestClient(uploadUrl);
            RestRequest request = new RestRequest(Method.POST);
            request.AddHeader("Authorization", string.Format("Bearer {0}",
                forge_oss_param.access_token));
            request.AddHeader("x-ads-meta-Content-Type", "application/octet-stream");
            request.AddHeader("content-type", "application/json");
            request.AddHeader("x-request-id", "d40f6f42-80be-4647-9f3a-277945f22196");

            if (eTags != null)
            {
                request.AddJsonBody(new
                {
                    uploadKey = uploadKey,
                    size = size,
                    eTags = eTags
                });
            }
            else
            {
                request.AddJsonBody(new
                {
                    uploadKey = uploadKey
                });
            }

            IRestResponse response = await client.ExecuteAsync(request);

            if (response.StatusCode == System.Net.HttpStatusCode.OK)
            {
                //succeeded
                Console.WriteLine("[complete upload] request succeeded ");


                var complete_upload_contents =
                    JsonConvert.DeserializeObject<Response_Complete_Upload>(response.Content);

                return complete_upload_contents;
            }
            else
            {
                return null;
            }
        }

        private static void PrintAndStoreStatus(uploadStatus r, int s, string m)
        {
            Console.WriteLine(m);
            r.Status = s;
            r.Message = m;
        }


        public void HandleDesignAutomationReadyEvent(object sender, DesignAutomationReadyEventArgs e)
        {
            e.Succeeded = true;
            run(e.DesignAutomationData);
        }

        public static void run(DesignAutomationData data)
        {
            if (data == null) throw new ArgumentNullException(nameof(data));

            Application rvtApp = data.RevitApp;
            if (rvtApp == null) throw new InvalidDataException(nameof(rvtApp));

            string modelPath = data.FilePath;
            if (String.IsNullOrWhiteSpace(modelPath)) throw new InvalidDataException(nameof(modelPath));

            Document doc = data.RevitDoc;
            if (doc == null) throw new InvalidOperationException("Could not open document."); 

            ModelPath path = ModelPathUtils.ConvertUserVisiblePathToModelPath(upload_model_filename);
            SaveAsOptions opts = new SaveAsOptions();

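            //if the model is workshared, save it as a new central model and
            //relinquish everything so the saved file is not left locked by this session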
            if (doc.IsWorkshared)
            {
                opts.SetWorksharingOptions(new WorksharingSaveAsOptions { SaveAsCentral = true });
                WorksharingUtils.RelinquishOwnership(doc, new RelinquishOptions(true), new TransactWithCentralOptions());
            }
            doc.SaveAs(path, opts);
        }
    }
}

To make it simpler, in the demo:

  • The default output of the work item is just a json file that records the uploading status.
  • The token, bucket key and object key are passed in as an input parameter of the work item (inputParam.json, sketched below). Ideally, the token should be generated during the process by calling an endpoint of your own server (also through OpenNetwork), where your server performs the Forge authentication.
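
For reference, here is a minimal sketch of what inputParam.json could contain, mirroring the ForgeOSSParam class in the code. All values are placeholders, and how the file reaches the working directory is up to your own activity definition, e.g. a work item input argument whose localName is inputParam.json:

{
  "access_token": "<2-legged token with data:read and data:write scopes>",
  "bucket_key": "<your bucket key>",
  "object_key": "<target object name, e.g. result.rvt>"
}
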
Demo log:
[03/25/2022 08:10:41] starting to upload one model in chunks... 
[03/25/2022 08:10:41] trying to get upload key.... 
[03/25/2022 08:10:41] [get upload key] succeeded 
[03/25/2022 08:10:41] trying to get signed urls of chunks.... 
[03/25/2022 08:10:42] [get signed urls of chunks] succeeded 
[03/25/2022 08:10:42] [upload 0 chunk stream] succeeded 
[03/25/2022 08:10:42] [upload 1 chunk stream] succeeded 
[03/25/2022 08:10:42] [upload 2 chunk stream] succeeded 
[03/25/2022 08:10:42] [upload 3 chunk stream] succeeded 
[03/25/2022 08:10:46] [upload 4 chunk stream] succeeded 
[03/25/2022 08:10:47] [upload 5 chunk stream] succeeded 
[03/25/2022 08:10:47] [upload 6 chunk stream] succeeded 
[03/25/2022 08:10:47] [upload 7 chunk stream] succeeded 
[03/25/2022 08:10:47] [upload 8 chunk stream] succeeded 
... ... ...
... ... ...
[03/25/2022 08:10:49] [upload 16 chunk stream] succeeded 
[03/25/2022 08:10:49] [upload 17 chunk stream] succeeded 
[03/25/2022 08:10:49] [upload 18 chunk stream] succeeded 
[03/25/2022 08:10:49] [upload 19 chunk stream] succeeded 
[03/25/2022 08:10:50] [upload 20 chunk stream] succeeded 
[03/25/2022 08:10:50] [upload 21 chunk stream] succeeded 
[03/25/2022 08:10:50] [upload 22 chunk stream] succeeded 
[03/25/2022 08:10:50] [upload 23 chunk stream] succeeded 
[03/25/2022 08:10:50] [upload 24 chunk stream] succeeded 
[03/25/2022 08:10:51] [upload 25 chunk stream] succeeded 
[03/25/2022 08:10:51] [upload 26 chunk stream] succeeded 
[03/25/2022 08:10:51] [upload ALL chunks stream] succeeded 
[03/25/2022 08:10:51] uploaded succeeded with single model 
[03/25/2022 08:10:51] completed uploading single model 
[03/25/2022 08:10:51] ended to upload model by OpenNetWork... 
[03/25/2022 08:10:51] writing status to to output.json... 
[03/25/2022 08:10:51] end writing status to  to output.json... 
[03/25/2022 08:10:55] Finished running.  Process will return: Success
[03/25/2022 08:10:55] ====== Revit finished running: revitcoreconsole ======
[03/25/2022 08:10:58] End Revit Core Engine standard output dump.
[03/25/2022 08:10:58] End script phase.
[03/25/2022 08:10:58] Start upload phase.
[03/25/2022 08:10:58] Uploading 'T:\Aces\Jobs\f5dc909bf4dd4372b7109e1bfb6ebcb4\output.json': verb - 'PUT', url - 'https://developer.api.autodesk.com/oss/v2/buckets/xiaodong-test-da/objects/output-s3.json'
[03/25/2022 08:10:59] End upload phase successfully.
[03/25/2022 08:10:59] Job finished with result Succeeded
[03/25/2022 08:10:59] Job Status:

From the beginning, the Data Management API has provided the Resumable Upload endpoint, which can also upload a file in chunks, and we have demo code for that approach in OpenNetwork as well. However, as the other blog mentions, direct binary uploading will be deprecated around September 30 of this year, so the best choice is the S3 signed url approach described above.
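
For comparison, the legacy resumable call pushed every chunk through Forge itself, roughly like this (one PUT per chunk, all sharing the same Session-Id; the byte range and sizes below are placeholders):

PUT https://developer.api.autodesk.com/oss/v2/buckets/<bucket>/objects/<model>.rvt/resumable
Authorization: Bearer <token>
Content-Type: application/octet-stream
Content-Range: bytes 0-5242879/52428800
Session-Id: <any id that stays the same for all chunks of one file>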


P.S. The puzzle icon in the primary image is from https://iconarchive.com/show/small-n-flat-icons-by-paomedia/puzzle-icon.html.
