Tutorial - Publish workflow
How to read this tutorial
The tutorial does not cover all features of Accsyn, just a selected set.
Entries in bold are made-up data for this tutorial only; substitute your own data as appropriate.
Labels in italic correspond to the name of an input where you enter data.
Text in [ BRACKETS ] denotes a button within the user interface, while LINK points out a link - for example a navigation link.
The Python scripts from this tutorial are available at GitHub: https://github.com/accsyn/publish-workflow
A YouTube video explaining the workflow: https://www.youtube.com/watch?v=k4i90AhYUns
Scenario
In this tutorial, your fictitious company "InterPost" uses external resources for image retouch services - subcontractors take one or more images and perform retouch on each of them.
Your on-prem data resides on network disk "vol" mounted on Linux server "alpha" @ "/net/vol".
Task data for subcontractors is located in one directory per subcontractor: "/net/vol/_TO_VENDORS/<E-mail>/".
The server is able to run Python scripts located in the folder "/net/vol/_SCRIPTS".
When testing publish, we use the test mail address "test@interpost.com", and the result of the task should end up in the directory: "/net/vol/proj/task001/data/proj_task001_v001".
(Optional) For files that cannot be recognised, offer to upload into directory: "/net/vol/_FROM_VENDORS/<E-mail>/<YYYYMMDD>/".
Your task is to set up access to a folder where subcontractors grab work as assigned and perform their tasks. When they are done, the subcontractor should be able to publish results back directly into the correct folder structure, with metadata saved to the production database and relevant notifications sent.
Installing Accsyn
The following guide is a short summary of the installation process described in detail here: Accsyn Admin Manual
Register your domain @ https://customer.accsyn.com
Follow the guide to initialise domain.
Install server; when the guide instructs you to install the Accsyn daemon, download and install it on your current file transfer server ("alpha").
Network; the guide will ask you to configure your firewall - add NAT port forwards 45190-45210 (TCP) to file transfer server "alpha". Note that the Accsyn daemon DOES NOT listen on any ports 24/7; it only starts a listening process during file transfer initialisation, software-firewalled to accept incoming connections from the remote client's WAN IP only.
Root share; browse to where the disk is mounted on the server [/net/vol].
Finish installation.
By now you have a fully working file transfer solution that can be used by external users to receive and send back large file packages at high speed (i.e. an accelerated, encrypted FTP server replacement). We are now going to continue and have file transfers processed by Python scripts to enable an automated workflow.
Configuring Accsyn
Share material to test user
Note: This can be done using the Accsyn Python API in an automated way upon task assignment in your production database systems. For hints on how to do this, please check this tutorial: Tutorial - Automated Production Outsourcing.
Log on to your domain [https://interpost.accsyn.com], using the admin account you registered with above.
Go to ADMIN>Shares and click [ CREATE SHARE ].
Browse to directory "/net/vol/" and select "_TO_VENDORS", click [ NEXT ].
Pick a name for your share, "TO_VENDORS" and then click [ CREATE ].
You will be directed to the sharing dialog; hit the [ SHARE DIRECTORY ] button.
Create the folder "test@interpost.com" (you are beneath share "TO_VENDORS", full path will be "/net/vol/_TO_VENDORS/test@interpost.com").
Select the newly created folder and click [ NEXT ].
The user is not known to Accsyn yet; choose the "Invite new" option and enter their E-mail [test@interpost.com].
Make sure only Read is checked - we do not want them to upload here. For uploads, they will be required to use the Publish flow.
Click [ GRANT ACCESS ] when you are done.
The user will now get an invite, be instructed on how to install the desktop application, and be able to see share "TO_VENDORS" and folder "test@interpost.com" only within their Accsyn. You can now copy work task source data into this folder in order to make it available to the subcontractor.
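As the note above mentions, the share and access grant can also be scripted through the Accsyn Python API upon task assignment. Below is a minimal sketch, assuming the accsyn-python-api package and API credentials in environment variables; the entity and field names are illustrative assumptions, so consult the API reference for the exact schema:
# Sketch: create a share and grant read access upon task assignment.
# Entity and field names are illustrative assumptions - verify against the Accsyn API reference.
import accsyn_api

# Session reads ACCSYN_DOMAIN, ACCSYN_API_USER and ACCSYN_API_KEY from the environment
session = accsyn_api.Session()

share = session.create("share", {
    "path": "/net/vol/_TO_VENDORS",  # assumed field; the directory to share
    "code": "TO_VENDORS",
})
session.create("acl", {
    "share": share["id"],
    "subpath": "test@interpost.com",  # the per-subcontractor folder
    "user": "test@interpost.com",     # invited by e-mail if not known to Accsyn
    "r": True,                        # read only - results come back through the publish flow
})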
Write pre-publish hook
We are now going to write the first of the two hooks, the "job-pre-publish-server" hook as it is named within Accsyn. This is where you can validate the filenames the remote user has supplied and return appropriate feedback.
Accsyn provides the JSON data in a temporary file and then runs your configured script, supplying the path to this data file. It also provides a path to where you should write the result back in JSON format. This mechanism is what Accsyn calls a "hook".
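In skeleton form, that is all a hook script is - read the input JSON from the first argument and, for the pre-publish hook, write the result JSON to the second:
# Minimal hook skeleton: Accsyn runs the script with the path to the input
# JSON and, for pre-publish, the path to write the result JSON back to.
import json, sys

if __name__ == '__main__':
    with open(sys.argv[1], "r") as f:
        data = json.load(f)   # The payload Accsyn prepared for this hook
    result = {"files": []}    # Build your response here
    if 2 < len(sys.argv):     # An output path is supplied for pre-publish
        with open(sys.argv[2], "w") as f:
            json.dump(result, f)
    sys.exit(0)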
In this tutorial, during the test, the user will drag-n-drop three files/directories, resulting in the following input data:
{
   "hook":"job-pre-publish-server",
   "user":"5d87825f045d0352d33435eb",
   "user_hr":"test@interpost.com",
   "files":[
      {
         "id":"96bc4b44-384b-497d-a119-3f07307627b6",
         "filename":"proj_task001_v001",
         "is_dir":true,
         "size":200000,
         "files":[
            {
               "filename":"image.0001.tif",
               "size":100000
            },
            {
               "filename":"image.0002.tif",
               "size":100000
            }
         ]
      },
      {
         "id":"44f29351-e870-4f9e-b329-a43759ed35c0",
         "filename":"proj_task1_v001_preview.mov",
         "size":1000
      },
      {
         "id":"1ff91107-d4d2-4de5-9d87-2b3ed1198e1c",
         "filename":"proj_task001_v001_assets",
         "size":20000,
         "files":[
            {
               "filename":"proj_task001_projectfile.xml",
               "size":20000
            }
         ]
      }
   ],
   "size":221000
}
The pre-publish Python script needs to:
Check that each file follows your naming convention - can be recognised and is not empty.
Check that all files required for the publish are present.
Return guidelines and specify what additional metadata needs to be input by the user.
Using a Python-capable text editor such as IDLE or Sublime Text, write the following script:
#!/usr/local/bin/python3
# Accsyn Hook example Python 2/3 script for processing an incoming publish request by user
import sys, os, json, copy, datetime

def generic_print(s):
    # Print that works in both Python 2 and 3
    try:
        if (3, 0) < sys.version_info:
            # Python 3 code in this block
            expr = """print(s)"""
        else:
            # Python 2 code in this block
            expr = """print s"""
        exec(expr)
    except:
        pass

def get_version(version_ident):
    # Parse the integer version number out of an identifier such as "v001"
    version = -1
    parts = version_ident.split("v")
    try:
        version = int(parts[1])
    except:
        pass
    return version

def validate_task_and_version(project_ident, task_ident, version_ident):
    # Here you could check the project and task against your production database,
    # for example directories on disk or by querying a project management
    # system/Google sheet or similar.
    if project_ident.lower() != "proj":
        return "Unknown project '%s'!" % project_ident
    if task_ident.lower() != "task001":
        return "Unknown task '%s'!" % task_ident
    if not version_ident.lower().startswith("v"):
        return "Invalid version identifier '%s' - has to start with a 'v'!" % version_ident
    parts = version_ident.split("v")
    try:
        version = int(parts[1])
    except:
        return "Invalid version identifier '%s' - must be a 'v' followed by an integer number!" % version_ident
    if version != 1:
        return "Version %d is not the next publishable version!" % version
    return None  # All ok

if __name__ == '__main__':
    p_input = sys.argv[1]
    data = json.load(open(p_input, "r"))
    generic_print("Pre-publish hook incoming data from user %s: %s" % (data['user_hr'], json.dumps(data, indent=3)))
    generic_print("Analyzing data")
    result = {
        "guidelines": "<html><body color='white'>Please follow our naming convention for publishing back to us:<br><ul><li>Publish directory: &lt;proj&gt;_&lt;task&gt;_&lt;vNNN&gt;</li><li>Publish preview: &lt;proj&gt;_&lt;task&gt;_&lt;vNNN&gt;_preview.mov|jpg</li><li>Publish assets: &lt;proj&gt;_&lt;task&gt;_&lt;vNNN&gt;_assets</li></ul><br><br>Select entries below and enter comment, time report and status:</body></html>",
        "comment": True,
        "statuses": [
            {"label": "For approval", "value": "for_approval", "default": True},
            {"label": "Work in progress", "value": "work_in_progress"},
        ],
        "time_report": True,
        "metadata": False,
        "files": []
    }
    ROOT_SHARE = "/net/vol"
    DAILY_FOLDER = datetime.datetime.now().strftime("%Y%m%d")
    for entry in data['files']:
        d = copy.deepcopy(entry)  # Return what we get - preserve ID field
        # Identify project, task
        parts = entry['filename'].split(".")[0].split("_")
        if 3 <= len(parts):
            warning_message = validate_task_and_version(parts[0], parts[1], parts[2])
            if warning_message is None:
                project = parts[0]
                task = parts[1]
                version = get_version(parts[2])
                publish_ident = "%s_%s_v%03d" % (project, task, version)
                if len(parts) == 3:
                    if entry.get('is_dir') is True:
                        # This is a valid publish! Check the data provided
                        if 0 < len(entry.get('files', [])):
                            start_image = 999999
                            end_image = -999999
                            found_numbers = []
                            for file_entry in entry['files']:
                                # Expect 'SOMENAME.<four digit number>.<ext>'
                                image_parts = file_entry['filename'].split(".")
                                if len(image_parts) == 3:
                                    # Check number
                                    try:
                                        number = int(image_parts[1])
                                        found_numbers.append(number)
                                        if number < start_image:
                                            start_image = number
                                        if end_image < number:
                                            end_image = number
                                        # Check extension
                                        if image_parts[2].lower() not in ['tiff', 'tif', 'png', 'tga', 'exr', 'dpx']:
                                            warning_message = "Image '%s' does not have a known frame format extension ('tiff','tif','png','tga','exr','dpx')!" % file_entry['filename']
                                            break
                                        # Here you can check if image size is ok and not varying - detect possible corrupt images.
                                    except:
                                        warning_message = "Image '%s' does not have a valid frame number!" % file_entry['filename']
                                        break
                                else:
                                    warning_message = "Image '%s' is not on the form 'imagename.number.ext'!" % file_entry['filename']
                                    break
                            if warning_message is None:
                                image_count = end_image - start_image + 1  # Expected count if sequence is complete
                                # Here you can check if all images are present, we check for missing images (holes)
                                n_prev = -1
                                for n in sorted(found_numbers):
                                    if n_prev != -1 and n != n_prev + 1:
                                        warning_message = "Image '%d' is missing!" % (n_prev + 1)
                                        break
                                    n_prev = n
                        else:
                            warning_message = "Directory is empty"
                        if warning_message is None:
                            d['ident'] = publish_ident
                            d['can_publish'] = True
                            # Matches the scenario target: /net/vol/proj/task001/data/proj_task001_v001
                            d['path'] = "%s/%s/%s/data/%s" % (ROOT_SHARE, project, task, publish_ident)
                        else:
                            d['warning'] = warning_message
                            d['rejected'] = True
                    else:
                        d['warning'] = "Only directories can be published!"
                        d['rejected'] = True
                elif len(parts) == 4:
                    if parts[3] == "preview":
                        filename_parts = entry['filename'].split(".")
                        if len(filename_parts) == 2 and (filename_parts[1].lower() == "mov" or filename_parts[1].lower() == "jpg"):
                            d['ident'] = "%s_preview" % publish_ident
                            d['can_upload'] = True
                            d['path'] = "%s/%s/%s/preview/%s.%s" % (ROOT_SHARE, project, task, publish_ident, filename_parts[1].lower())
                        else:
                            d['warning'] = "Previews can only be of .mov or .jpg file type/extension!"
                            d['rejected'] = True
                    elif parts[3] == "assets":
                        # Check so not empty
                        if 0 < len(entry.get('files', [])):
                            d['ident'] = "%s_assets" % publish_ident
                            d['can_upload'] = True
                            d['path'] = "%s/%s/%s/assets/%s" % (ROOT_SHARE, project, task, publish_ident)
                        else:
                            d['warning'] = "Empty assets directory!"
                            d['rejected'] = True
                    else:
                        d['warning'] = "Unknown additional %s asset, only previews and assets are supported!" % (publish_ident)
                        d['rejected'] = True
                else:
                    d['warning'] = "File is not following our naming convention!"
                    d['rejected'] = True
            else:
                d['warning'] = warning_message
                d['path'] = "%s/_FROM_VENDORS/%s/%s/%s" % (ROOT_SHARE, data['user_hr'], DAILY_FOLDER, entry['filename'])  # Still offer to upload somewhere, you can also choose to reject this.
        else:
            d['warning'] = "File is not following our naming convention!"
            d['rejected'] = True
        result['files'].append(d)
    generic_print("My results: %s" % (json.dumps(result, indent=3)))
    p_output = sys.argv[2]
    generic_print("Writing results back to: %s" % (p_output))
    with open(p_output, "w") as f:
        f.write(json.dumps(result))
    sys.exit(0)
Note: The scripts used in this tutorial can be downloaded from GitHub: https://github.com/accsyn/publish-workflow
The script should be quite self-explanatory if you are familiar with Python. Here follow some explanations:
The "guidelines" are shown to user before they submit the publish, here you can help user by giving example of naming conventions.
By returning "comment":True, you required the user to enter comment metadata associated with publish. We will show later how this metadata can be extracted and stored. Remove this or set to false if you do not require this input.
By returning "time_report":True, you require the user to enter amount of time spent on task. Remove this or set to false if you do not require this input.
By returning "statuses":[..], you require the user to choose a status for publish. Remove this or set to None if you do not require this input.
You can of already in this stage update the task status / create initial version in your production database systems, to prevent a duplicate publish from same user or someone else.
The "ident" field can be customised by you and is not used by Accsyn, here you can for example store ID of project, task and/or version if you like.
Finally, save the script to "/net/vol/_SCRIPTS/accsyn/pre_publish.py".
Configure pre-publish hook
Now we need to tell Accsyn where to find our pre-publish hook script:
Log on as admin, head over to ADMIN>SETTINGS>Hooks and enable the "hook-job-pre-publish-server" hook.
Beneath Linux path, enter the following: "/net/vol/_SCRIPTS/accsyn/pre_publish.py ${PATH_JSON_INPUT} ${PATH_JSON_OUTPUT}".
Click [ SAVE ] to have settings saved.
Your Accsyn is now ready to accept publishes and have them uploaded into the correct location.
Write publish hook
We should now write the publish hook script that gets executed after the files and directories have been uploaded to your server. This is of course optional and can be omitted if you do not wish to save user input (metadata) or do any other workflow integrations.
Note: These two publish scripts can be combined into one, for example by adding a --pre command line argument or similar, as in the sketch below.
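A minimal sketch of such a combined hook script; the pre_publish and publish functions are hypothetical placeholders for the logic in this tutorial's two scripts:
#!/usr/local/bin/python3
# Sketch: one script serving both hooks, selected with a --pre flag.
import argparse, json, sys

def pre_publish(data):
    # Placeholder: validate files and build the result dict (see pre_publish.py)
    return {"files": []}

def publish(data):
    # Placeholder: store metadata and run integrations (see publish.py)
    pass

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--pre", action="store_true", help="Run as pre-publish hook")
    parser.add_argument("path_json_input")
    parser.add_argument("path_json_output", nargs="?")
    args = parser.parse_args()
    with open(args.path_json_input, "r") as f:
        data = json.load(f)
    if args.pre:
        with open(args.path_json_output, "w") as f:
            json.dump(pre_publish(data), f)
    else:
        publish(data)
    sys.exit(0)
The two hook command lines would then point at the same script, with and without the --pre flag.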
The data that comes in is identical to the data that arrived at your previous pre-publish script, except that user input/metadata has been appended:
{
   "hook":"job-publish-server",
   "user":"5d87825f045d0352d33435eb",
   "user_hr":"test@interpost.com",
   "files":[
      {
         "id":"96bc4b44-384b-497d-a119-3f07307627b6",
         "filename":"proj_task001_v001",
         "is_dir":true,
         "size":200000,
         "files":[
            {
               "filename":"image.0001.tif",
               "size":100000
            },
            {
               "filename":"image.0002.tif",
               "size":100000
            }
         ],
         "comment":"My first version",
         "time_report":3600,
         "status":"for_approval"
      },
      {
         "id":"44f29351-e870-4f9e-b329-a43759ed35c0",
         "filename":"proj_task1_v001_preview.mov",
         "size":1000
      },
      {
         "id":"1ff91107-d4d2-4de5-9d87-2b3ed1198e1c",
         "filename":"proj_task001_v001_assets",
         "size":20000,
         "files":[
            {
               "filename":"proj_task001_projectfile.xml",
               "size":20000
            }
         ]
      }
   ],
   "size":221000
}
The publish Python script needs to:
Re-identify the project, task and version.
Store user input (metadata) in your production database.
Trigger some post processing of data.
Notify production managers regarding the new version.
#!/usr/local/bin/python3
# Accsyn Hook example Python 2/3 script for postprocessing a publish
import sys, os, json, copy, datetime

def generic_print(s):
    # Print that works in both Python 2 and 3
    try:
        if (3, 0) < sys.version_info:
            # Python 3 code in this block
            expr = """print(s)"""
        else:
            # Python 2 code in this block
            expr = """print s"""
        exec(expr)
    except:
        pass

def get_version(version_ident):
    # Parse the integer version number out of an identifier such as "v001"
    version = -1
    parts = version_ident.split("v")
    try:
        version = int(parts[1])
    except:
        pass
    return version

def validate_task_and_version(project_ident, task_ident, version_ident):
    # Here you could check the project and task against your production database,
    # for example directories on disk or by querying a project management
    # system/Google sheet or similar.
    if project_ident.lower() != "proj":
        return "Unknown project '%s'!" % project_ident
    if task_ident.lower() != "task001":
        return "Unknown task '%s'!" % task_ident
    if not version_ident.lower().startswith("v"):
        return "Invalid version identifier '%s' - has to start with a 'v'!" % version_ident
    parts = version_ident.split("v")
    try:
        version = int(parts[1])
    except:
        return "Invalid version identifier '%s' - must be a 'v' followed by an integer number!" % version_ident
    if version != 1:
        return "Version %d is not the next publishable version!" % version
    return None  # All ok

if __name__ == '__main__':
    p_input = sys.argv[1]
    data = json.load(open(p_input, "r"))
    generic_print("Publish hook incoming data from user %s: %s" % (data['user_hr'], json.dumps(data, indent=3)))
    generic_print("Analyzing data")
    for entry in data['files']:
        # Re-identify project, task
        parts = entry['filename'].split(".")[0].split("_")
        if 3 <= len(parts):
            warning_message = validate_task_and_version(parts[0], parts[1], parts[2])
            if warning_message is None:
                project = parts[0]
                task = parts[1]
                version = get_version(parts[2])
                publish_ident = "%s_%s_v%03d" % (project, task, version)
                path_server = entry['path']  # Absolute path on server, as returned by our pre-publish hook
                if len(parts) == 3:
                    # The published data, here you can mangle the files/directories as
                    # needed and create records in your production database.
                    # In this example, we save the user comment, status and time report
                    # in a sidecar JSON file next to the published directory.
                    with open("%s.metadata.json" % path_server, "w") as f_md:
                        json.dump({
                            'user': data['user_hr'],
                            'comment': entry['comment'],
                            'time_spent_s': entry['time_report'],
                            'status': entry['status'],
                            'metadata': entry.get('metadata')
                        }, f_md)
                    generic_print("Saved publish %s metadata to: %s" % (publish_ident, f_md.name))
                elif len(parts) == 4:
                    if parts[3] == "preview":
                        # A preview made by the user, here you can use the preview or
                        # generate a more solid custom preview from the publish above.
                        pass
                    elif parts[3] == "assets":
                        # Here you can check assets, for example align known paths in
                        # work project files so they are openable on-prem.
                        pass
    sys.exit(0)
Finally, save the script to "/net/vol/_SCRIPTS/accsyn/publish.py".
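The script above stores the metadata but leaves steps like notifications open. A minimal sketch of notifying production managers by e-mail, using Python's standard smtplib; the mail host and addresses are hypothetical:
# Sketch: notify production managers about a new publish via e-mail.
# SMTP host and addresses below are hypothetical - adapt to your setup.
import smtplib
from email.mime.text import MIMEText

def notify_publish(publish_ident, user, comment):
    msg = MIMEText("New publish %s by %s:\n\n%s" % (publish_ident, user, comment))
    msg['Subject'] = "[Accsyn] New publish: %s" % publish_ident
    msg['From'] = "accsyn@interpost.com"
    msg['To'] = "production@interpost.com"
    smtp = smtplib.SMTP("mail.interpost.com")
    try:
        smtp.sendmail(msg['From'], [msg['To']], msg.as_string())
    finally:
        smtp.quit()
Call it from the publish hook after the sidecar metadata has been written, for example notify_publish(publish_ident, data['user_hr'], entry['comment']).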
Configure publish hook
Now, finally, we need to tell Accsyn where to find our publish hook script:
Log on as admin, head over to ADMIN>SETTINGS>Hooks and enable the "hook-job-publish-server" hook.
Beneath Linux path, enter the following: "/net/vol/_SCRIPTS/accsyn/publish.py ${PATH_JSON_INPUT}".
Click [ SAVE ] to have settings saved.
Finalising
Finally, we test it:
Activate the test account "test@interpost.com".
On a remote machine, install the Accsyn desktop app and log on as this user.
Manufacture a folder with test images, a preview and some assets - a helper sketch follows after these steps.
Drag-n-drop the files onto Accsyn and choose Publish.
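A throwaway helper like the following manufactures test data matching the naming convention (hypothetical; file contents are junk bytes, only there to give the files a size):
# Hypothetical helper: create dummy test data following the tutorial's naming convention.
import os

BASE = "proj_task001_v001"

def write_dummy(path, size=1024):
    with open(path, "wb") as f:
        f.write(b"\0" * size)

os.makedirs(BASE)
for n in range(1, 11):
    write_dummy(os.path.join(BASE, "image.%04d.tif" % n))
write_dummy(BASE + "_preview.mov")
os.makedirs(BASE + "_assets")
write_dummy(os.path.join(BASE + "_assets", "proj_task001_projectfile.xml"))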
If you run into internal errors - usually due to missing exec permission on the scripts in Linux (chmod 755 the scripts) - find clues @ ADMIN>Audits>Job.
Your Accsyn is now all set up to integrate your subcontractors into your workflow, in a safe, fast and user-friendly manner!