Michael Bateman

Lightroom Catalogs have to be on a local volume of course. But referenced images can go on a network drive, which is perfect when you have a lot of RAW images, etc. For my part, I sometimes run into trouble because the volume always gets mounted as "home" or "home-1" etc. which can be a problem if you ever connect to more than one NAS as I often do.

I've discovered a great trick on my Mac with something called autofs. Here is a good article on how to keep network volumes mounted using autofs. This works great with my existing catalog images. I can give the network volume a unique name and it appears in a folder called "servers" in my home directory. Lightroom never has trouble finding it.

...except if I try to import from a folder on the share!
....or if I try to use a folder on the share as a destination for an import!
.....or if I try to browse a folder in the share using Bridge!

Yeah, for some reason my Adobe products just don't like it when I mount network folders this way - and I would love to be wrong about that! Anyone?
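
(For reference, the kind of setup I mean is roughly the following - only a sketch of the usual recipe, with the mount point, map name, server names and share names as placeholders; credentials normally go in the share URL or in /etc/nsmb.conf rather than being typed at mount time.)

    # /etc/auto_master -- add one line pointing at a custom map
    sudo sh -c 'echo "/Users/michael/servers   auto_nas" >> /etc/auto_master'

    # /etc/auto_nas -- one entry per share, each with its own unique name
    sudo sh -c 'cat > /etc/auto_nas <<EOF
    synology_photos  -fstype=smbfs  ://michael@synology.local/photos
    qnap_photos      -fstype=smbfs  ://michael@qnap.local/photos
    EOF'

    # flush and reload the autofs maps
    sudo automount -vc

Each share then appears under ~/servers with a stable, unique name instead of the "home-1" style names you get otherwise.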

Most of you probably don't have a "NAS-centric" workflow like me and rely more on externally mounted USB 3 and Thunderbolt drives for hot and warm projects, maybe using a NAS for cold storage. (Just remember kids, NAS is not backup, but it can be used as part of an overall backup strategy - but I digress!)

I should probably re-think my workflow. But I just love my Canon 7D with the Wireless File Transmitter WFT-E7A. It does a great job of putting all the files from the camera straight onto the NAS where I can use shell scripts to automate my workflow, renaming files, backing them up, etc.

But in the meantime I thought I would reach out here and see what everyone else does. I am new to the forum; this is my first post. Anyone else here use a NAS with LightRoom? Any ideas for me? I have two: A Synology DS1513+ and a Qnap TVS-871T.

In any event, thanks for reading my post to anyone who has made it this far!

-Michael
 
SQLite locks the entire database file while a write transaction is open, and it does not support any kind of user management or privileges, so using SQLite for multi-user applications has many limitations.
Regarding "transparency", it depends: the more database-specific features you use, the less transparency you have. But when you use the database just as a "stupid" data store it becomes easier.

A Lightroom catalog is a SQLite database that uses tables, indexes and some simple triggers - that's all. Given such a small scope, it should be possible to support other RDBMSs as well. But of course, that is a decision Adobe would have to make.
Yes, though a well-designed system, especially a multi-user system, keeps transactions as short as possible. The vast majority of accesses in most systems (including Lightroom) are read-only. I did not say SQLite was the best choice, simply that it is not a non-viable choice, as had been indicated.

I think we are mostly off the subject, though.

The issue for a NAS-centric workflow is not the catalog (unless one cheats and forces the catalog onto the NAS). The issue is the images, folder updates, and image updates. A NAS is reasonably safe for those operations, but it suffers from some of the same issues as an external hard drive: you are more likely to be disconnected from your data store while updates are occurring than you are with internal storage. For a USB device, cable movement/failure, cheap controllers, and human error play a role. For a NAS, especially a Wi-Fi NAS, poor-quality home gear, RF interference, and human error play a role. But if one takes care to get good-quality gear and use it properly, both are quite usable. In fact, I would argue that today, having the catalog on a NAS is not materially or inherently more problematic than on an EHD.

I use a NAS for backup, and that NAS is one heck of a lot more reliable than my primary storage (albeit slower) - but that's because I built it that way. That's not necessarily the case for a randomly selected NAS system; often it's quite the reverse.

I think the insidious problem with any storage mechanism for photographic use is not hard failure. The "my disk drive failed" scenarios generally have one of two answers: you have a backup, or you are screwed.

The bigger issue with more... disconnected (not quite the right word, but close)... storage solutions is that they increase the number of vulnerable components and the complexity of getting your data from what you see to where you save it. And for photography, for the most part, there is zero check that what you saved is correct. Images do not have built-in checksums or redundancy (mostly - DNG is a partial exception); if you write 40MB of image data to a disk, you have no way to know whether what you read next time is what you wrote. The more complex the pathway from computer to storage, the more things can go wrong, either from interruption (as above) or just bad hardware or software in the chain. We tend to treat digital systems as binary - they are working, or they are not, on or off. That's usually close enough, but it's not strictly true: almost every digital component has some level of undetected error rate, and the more components and complexity you introduce, especially in series, the higher the overall error rate.
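
(As an aside, you can bolt on your own integrity check by recording checksums at import time and verifying them later. A minimal sketch, assuming the shasum that ships with macOS and placeholder paths:

    # record a checksum for every image under the photo tree, right after import
    cd /Volumes/FastDrive/photos
    find . -type f \( -iname '*.cr2' -o -iname '*.dng' -o -iname '*.jpg' \) -print0 \
      | xargs -0 shasum -a 256 > ~/photo-checksums.txt

    # later, verify that what you read back is what you wrote; print only failures
    shasum -a 256 -c ~/photo-checksums.txt | grep -v ': OK$'

It won't prevent corruption, but it does turn "silent" errors into detectable ones.)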

NAS is "safe". EHD's are "safe". They are used for industrial strength systems. Not trying to say the sky will fall.

But home-grade systems, often bought from low-bid suppliers - the random implementations you will find on a photographer's desk - are, as a general statement, not as safe as in-system drives.

I realize the move toward mobile/laptop/tablet use pushes all of us toward such solutions, and some situations leave one with no choice.

I just offer this ramble (and sorry for the diversion into multi-user database space) as a suggestion: if you are highly IT-literate, ignore this advice, as you already know enough to decide for yourself. If you are not, and you have the choice between in-system drives and NAS or EHDs for primary storage (at the same redundancy levels), use in-system drives. The KISS principle still applies.
 
Hello! Just checking back to see if anyone has a solid NAS-centric workflow they can tell us about? I love LR and I love both my QNAP and my Synology DiskStation, but I have not yet been able to wean myself off my fast local external drive for my primary workspace. It's a shame, because the QNAP has a Thunderbolt connection and an SSD cache!

I really appreciate the discussion about the SQL backend, and I get why Adobe might not (yet, or ever) support a large multi-user backend for Lightroom. But it would be really cool to have a workgroup share photos and editing chores on a project - either locally across an ad-hoc LAN at something like a music festival or wedding, or over a WAN, with someone at the back office getting a head start on editing while part of the team is still shooting.

Take care,
-Michael
 
Not sure what you're looking for. Victoria's list of ways to use a catalog on multiple computers pretty much exhausts the available options. If one of those doesn't suit your workflow, you'll need another tool.

I'm not familiar with e.g. Photoshop -- but suspect it saves edits to the image rather than keeping them in the (single-user) catalog. That might be an alternative approach for your multiuser needs?

The extreme level of dependence on the catalog requires Lr user(s) to enforce a protocol that precludes multiple simultaneous (or "stale") edits to the catalog. Maybe you invent a token -- a stuffed doll -- that a user must physically hold before she can open the catalog on her computer. (Personally, I use myself as a token -- I share a catalog across multiple computers, sync'd via a NAS, but I'm the only person who uses it -- thus eliminating the possibility of simultaneous multiuser access.)
 
Which NAS do you have, and which sync tool? That sounds interesting. How big is your catalog as you use it? What kind and size of local storage do you use? Do you ever have to sync to your NAS on the road (not on your own LAN)?

I might be better served with something like Photo Mechanic, but I do so adore Lightroom and how it lets you batch process, and I have gotten so comfortable with it. Your approach intrigues me. I used to keep all my photos in Dropbox and just selectively sync whichever photos I was working on from wherever I was, but I hit the upper limit of Dropbox for Teams storage. Perhaps I could do this on my NAS. Perhaps that's what you are describing.

As these cameras get bigger and faster, Lightroom is gonna have to evolve to accommodate larger shared storage and workgroups, methinks.

Thanks very much for sharing your thoughts.

Michael
 
Michael,

I had a NAS set up on my Mac a while back. It worked fine; I custom-built the file server using Linux with a Samba (SMB) network service.
But in any case, the critical aspect is how the NAS is named. The name needs to be unique for the Mac to mount it under /Volumes in a consistent manner.
I no longer have the Mac or the file server (I switched to Windows and local storage), so I cannot look at any details.

Tim
 
Does anyone know how to tap the Adobe Metadata written to disk outside of LightRoom, Photoshop, or Bridge?

(Marking a file with a color does not seem to translate to the OSX tags, etc. Nor do the other pick flags, etc. )

What I want to do is work off a fast local drive that’s backed up frequently to a NAS. One of the problems I have is deletion. I want to first mark a file with the rejected flag then wait for that to sync to the NAS before actually telling LightRoom to “delete the files marked for rejection.”

Then I just need a shell script to run across the NAS volume where the photos are kept and physically delete files with the rejected flag set. (Otherwise, when I go back to an older project, the deleted files keep coming back to haunt me.)

I shoot a lot of birds you see. Usually in RAW and sometimes also with bracketing and very often shooting hundreds of frames trying to catch that “fight shot!”

So I need fast local storage and either an unlimited amount of archive space or some ability to manage it! ; )

Michael
 
I want to first mark a file with the rejected flag then wait for that to sync to the NAS before actually telling LightRoom to “delete the files marked for rejection.”

Then I just need a shell script to run across the NAS volume where the photos are kept and physically delete files with the rejected flag set.

OK, I'll bite -- why wait for it to sync if your plan is to delete what it just sync'd?

Does anyone know how to tap the Adobe Metadata written to disk outside of LightRoom, Photoshop, or Bridge?

First, the metadata is only written if you have the option turned on (which slows Lightroom down), or if you do a "write metadata" explicitly. It's fine to do either; just mentioning it.

Secondly, WHERE it is written depends on the file type - e.g. it may be different for TIFF, PSD, JPG and regular RAW; I can't recall for DNG. While it's doubtful you use all of those, you might use some. For regular raw files it's in an XMP sidecar, scattered hither and yon. The simplest way to find out where any field of interest lives is this:
  • Without the file marked, copy the XMP somewhere else.
  • Mark the file
  • Do a text file comparison between the XMP file before and after.
If it's not an XMP file then, unless you love pretty technical editing, you can't "see" the metadata directly. You might look at a tool like ExifTool to extract it separately and then search the extraction, but now it's getting even more complicated.
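
For example, something like this - assuming exiftool is installed, and that the field you care about (a star rating or color label, say) has actually been written out to the sidecars; paths are placeholders:

    # show the rating and color label stored in each XMP sidecar
    exiftool -q -Rating -Label /Volumes/NAS/photos/2017-09-17/*.xmp

    # print just the paths of sidecars whose rating is exactly 1
    exiftool -q -if '$Rating == 1' -p '$Directory/$FileName' /Volumes/NAS/photos/2017-09-17/*.xmp

From there a shell script can act on the matching files however you like.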

On the Mac there are a bunch of comparison programs available though I never use any (here is a list I ran across, I know nothing about them, just a starting point).

Third, finding them on the NAS depends on the tools you have. Does the Mac have "grep" like regular Unix? That's very powerful and can find and delete at the same time.

Fourth, and most important (and a bit of a surprise): the Reject flag is not written to disk in the XMP file. It's apparently only stored in the catalog. I just did a trial run and a before-and-after comparison and don't see it there. So you'd have to select all the rejected files, mark them with some other metadata that is written, and then go by that.

But I'll come back to the first question -- why back it up before you cull, if your intent is to delete the backup? I would get it if the backup were a safety copy you kept, but why go to the trouble and then delete it? Cull first?

If the reason is that backups run automatically and will catch the files before you cull (and if it's OK not to back up that fast), do your initial imports into a folder on the same fast disk that is set not to back up - e.g. two parent trees like \photos and \photosNoBack, with the same folder names underneath. Then, as soon as you are done culling, delete the rejects and use Lightroom to move the subfolder (e.g. \photosNoBack\20170917) into the backed-up tree (\photos\20170917). If they are on the same disk it's instant -- no photos are actually copied; it just moves the subfolder and updates the catalog.
 
Yes. What I can’t do is ask a shell script to take an action based on an Adobe Metadata Tag.

That’s the question I am asking.

Thanks!!

Michael

No. Best bet: write a shell script to look for missing files. That is, look at all files in the backup and compare against the master; if a file is not found on the master, delete it.
I have written such scripts before, in Java, in Bash... So it can be done. I do not have any handy examples.
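
Roughly like this, for what it's worth - the paths are placeholders, the two trees are assumed to mirror each other, and it only echoes until you swap in the real rm:

    #!/bin/bash
    # delete any file in the NAS backup that no longer exists on the master
    MASTER=/Volumes/FastDrive/photos
    BACKUP=/Volumes/NAS/photos

    cd "$BACKUP" || exit 1
    find . -type f -print0 | while IFS= read -r -d '' f; do
        if [ ! -e "$MASTER/$f" ]; then
            echo rm -- "$BACKUP/$f"     # remove the echo once the output looks right
        fi
    done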

Tim
 
Actually, even better. Use rsync. Functionality is built in.
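
For example (paths are placeholders, and --dry-run previews without touching anything):

    # mirror the fast local drive to the NAS, removing anything deleted locally
    rsync -av --delete --dry-run /Volumes/FastDrive/photos/ /Volumes/NAS/photos/

Drop --dry-run once the output looks right; the trailing slashes matter.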

Tim
 
Can you write your own scripts?
OK, I'll bite -- why wait for it to sync if your plan is to delete what it just sync'd?

My fast local drive is backed up per best practices.

It’s backed up to the NAS.

As I work, changes are written to the fast local drive. A cron task syncs any changes to the NAS backup in the background.
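
(For the curious, the crontab entry behind that is just something along these lines - paths are placeholders, and it deliberately does not use --delete, so files removed locally stay in the backup.)

    # crontab: sync the working drive to the NAS every hour, without propagating deletions
    0 * * * * /usr/bin/rsync -a /Volumes/FastDrive/photos/ /Volumes/NAS/photos/ >> /Users/michael/sync.log 2>&1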

Now, in this scenario, running “Delete all photos marked as rejected” only deletes the local copy. It does not, itself, know about the backup I have created outside of Lightroom on the NAS. And if the local copy is deleted before the “rejected” tag syncs to the backup, the file never gets deleted off the backup.

Yes there are ways of syncing deletions and I have some long, complicated, boring reasons why those won’t work with my workflow.

This is entirely tangential to my question but all very much appreciated and it’s entirely possible that you help me in a way I was not anticipating.

There seems to be a unifying standard for how the metadata is written to disk, or Bridge would not see changes made in Lightroom. I am hoping it's not a proprietary Adobe format that won't let me script my workflow outside of Adobe products.

In short: does anyone know of a shell command that acts on a filespec based on an attribute set from Lightroom? Or any framework that would allow me to script a workflow outside of Bridge/Lightroom, regardless of what I am trying to accomplish? (Which is still open for discussion, mind you; I just would like to know the answer to this query if someone has one.)

Thanks!

Michael
 
Michael, the reject flag isn't in the file, it's in the catalog. One option is to write a SQL script against the catalog before you delete the files, one that creates a delete command for each file spec marked rejected; then just run those commands (possibly edited to adjust the root folder) on the backup system.
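
Something along these lines, for instance - run it against a copy of the catalog (never the live one while Lr is open). The table and column names below are what I see in a current catalog (pick = -1 marks a reject); Adobe could change them in any release, and the catalog path and volume prefixes are placeholders:

    # list every rejected photo's full path, map it to the NAS copy, and build rm commands
    sqlite3 "/path/to/Catalog.lrcat" "
      SELECT rf.absolutePath || fo.pathFromRoot || fi.idx_filename
      FROM Adobe_images i
      JOIN AgLibraryFile fi       ON i.rootFile    = fi.id_local
      JOIN AgLibraryFolder fo     ON fi.folder     = fo.id_local
      JOIN AgLibraryRootFolder rf ON fo.rootFolder = rf.id_local
      WHERE i.pick = -1;" \
    | sed 's|^/Volumes/FastDrive|/Volumes/NAS|' \
    | while IFS= read -r f; do echo rm -- "$f"; done    # drop the echo once verified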

But... did you consider my idea of not backing up the folder with new images until you have culled? Or are you culling old items, not just a recent shoot?
 
Actually, even better. Use rsync. Functionality is built in.

Tim
Thanks very much - I am totally up for that, but I sometimes delete files from the local fast drive that I still want to be able to recover. Imagine, for instance, that I want to keep all five-star photos from a year ago and all four- and five-star photos from up to six months ago. So I delete a bunch of 1-, 2-, 3- and 4-star photos from the local drive accordingly. I do this NOT wanting to delete them from the backup. This, as distinguished from photos I never want to see again for the rest of my natural life.

Now, I do not ACTUALLY rely on the Star ratings for my workflow. I just used that as an example. I know this is a forum for Workflow Strategies but I am asking a specific question that if I had an answer to I could more easily engage with all of you on what or how I might do differently.

Am I really the only one who wants to automate a workflow in some fashion outside of LightRoom that uses Metadata written by LightRoom? It seems this would be a pretty handy thing to be able to do yes? Workflow automation that Works, That Flows, and is Automatic?! ; )

Can rsync tap Adobe Metadata in the filespec?! That would do it I think.

Thanks everyone for indulging me in this quest!

Michael
 
Michael, the reject flag isn't in the file, it's in the catalog. One option is to write a SQL script against the catalog before you delete the files, one that creates a delete command for each file spec marked rejected; then just run those commands (possibly edited to adjust the root folder) on the backup system.

But... did you consider my idea of not backing up the folder with new images until you have culled? Or are you culling old items, not just a recent shoot?
I am pretty sure the reject tag is written out. If you mark a file as rejected in LightRoom and close LightRoom you can see it in Bridge.

Thanks for the suggestion but 1) I never work on my only copy of something, and 2) yes actually sometimes I shift from one project to another.
 
As you can write scripts I can suggest the following.

Select within Lr the images you wish to process on your NAS (e.g. all files marked for deletion in Lr).

Then you can use LrTransporter or JB ListView to create a CSV file (one record per image) containing the metadata fields of your choice. The CSV file only holds details for your selected records.

Use the CSV file, combined with a script or an app, to process the files on your NAS as you see fit.

So... most likely you will use the existing filename/folder name to identify the image you wish to process, but you will need some rule to locate it on the NAS (e.g. change the leading characters of the folder-name string).

You will need to devise your own workflow and controls to make sure you work in the proper sequence.
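
As a rough illustration of that last step - assuming the CSV holds one full local path per line in its first column, and with the roots as placeholders you would adapt:

    #!/bin/bash
    # walk the CSV exported from Lr and build a delete command for each file's NAS copy
    CSV=/Users/michael/rejects.csv      # exported via LrTransporter / ListView
    LOCAL_ROOT=/Volumes/FastDrive
    NAS_ROOT=/Volumes/NAS

    while IFS=, read -r localpath _rest; do
        naspath="$NAS_ROOT${localpath#$LOCAL_ROOT}"
        echo rm -- "$naspath"           # drop the echo once the output looks right
    done < "$CSV"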

I use LrTransporter or JB ListView to create a CSV interface between Lr and lots of other applications, including InDesign to create preformatted high-quality prints using Title and other metadata fields, MS Word to create A4 PDF documents with an image, title and caption per page, or Photoshop to apply a template to a set of images.

I hate inefficient workflows and am upset that Adobe does not fully understand the word "workflow", especially between their own products (i.e. why can we not create books in InDesign directly from Lr, without intermediate files?).

There are some 'gotchas' you may bump into when using this with other Adobe apps. If you have specific queries I will be happy to try to answer.
 
Does anyone know how to tap the Adobe Metadata written to disk outside of LightRoom, Photoshop, or Bridge?

(Marking a file with a color does not seem to translate to the OSX tags, etc. Nor do the other pick flags, etc. )

What I want to do is work off a fast local drive that’s backed up frequently to a NAS. One of the problems I have is deletion. I want to first mark a file with the rejected flag then wait for that to sync to the NAS before actually telling LightRoom to “delete the files marked for rejection.”

Then I just need a shell script to run across the NAS volume where the photos are kept and physically delete files with the rejected flag set. (Otherwise, when I go back to an older project, the deleted files keep coming back to haunt me.)

I shoot a lot of birds you see. Usually in RAW and sometimes also with bracketing and very often shooting hundreds of frames trying to catch that “fight shot!”

So I need fast local storage and either an unlimited amount of archive space or some ability to manage it! ; )

Michael
If you want to preview your RAW files outside LR and also mark them for deletion, try FastRawViewer. It's very inexpensive and fast.

Phil Burton
 
Thanks very much - I am totally up for that, but I sometimes delete files from the local fast drive that I still want to be able to recover. Imagine, for instance, that I want to keep all five-star photos from a year ago and all four- and five-star photos from up to six months ago. So I delete a bunch of 1-, 2-, 3- and 4-star photos from the local drive accordingly. I do this NOT wanting to delete them from the backup. This, as distinguished from photos I never want to see again for the rest of my natural life.

Now, I do not ACTUALLY rely on the Star ratings for my workflow. I just used that as an example. I know this is a forum for Workflow Strategies but I am asking a specific question that if I had an answer to I could more easily engage with all of you on what or how I might do differently.

Am I really the only one who wants to automate a workflow in some fashion outside of LightRoom that uses Metadata written by LightRoom? It seems this would be a pretty handy thing to be able to do yes? Workflow automation that Works, That Flows, and is Automatic?! ; )

Can rsync tap Adobe Metadata in the filespec?! That would do it I think.

Thanks everyone for indulging me in this quest!

Michael

No, rsync cannot look at such metadata.
Sounds like a complex workflow. But in any case, I see a few possible choices:
1. Add a manual step, such as Gnitts suggested, to capture the file list, and then process it manually via another script.
2. Write a script that, once Lr is closed, opens the catalog file via SQLite and queries for all files matching the reject flag.
3. Write a Lua script and add it to Lr to do what you want.
4. Change your workflow.

Tim
 
I am pretty sure the reject tag is written out. If you mark a file as rejected in LightRoom and close LightRoom you can see it in Bridge.

I did this in 2015.12:
  • Write XMP
  • Copy/save that file
  • Reject that image
  • Write XMP
  • Diff the two XMP's.
I got this, which doesn't seem to indicate the flag.

19c19
< xmp:MetadataDate="2017-09-27T15:05:59-04:00"
---
> xmp:MetadataDate="2017-09-27T15:05:19-04:00"
77c77
< xmpMM:InstanceID="xmp.iid:edd95aec-8e62-674d-acf4-1d98cf0bfe2a"
---
> xmpMM:InstanceID="xmp.iid:48b76329-862c-df48-940a-2bbffb3f24c2"
231,232c231,232
< stEvt:instanceID="xmp.iid:edd95aec-8e62-674d-acf4-1d98cf0bfe2a"
< stEvt:when="2017-09-27T15:05:59-04:00"
---
> stEvt:instanceID="xmp.iid:48b76329-862c-df48-940a-2bbffb3f24c2"
> stEvt:when="2017-09-27T15:05:19-04:00"


I did it again with a star rating of 1 star and got a clear difference:

19,20c19
< xmp:MetadataDate="2017-09-27T15:10:19-04:00"
< xmp:Rating="1"
---
> xmp:MetadataDate="2017-09-27T15:05:19-04:00"
78c77
< xmpMM:InstanceID="xmp.iid:57bb73ad-ed0c-7e49-b458-ba0a0669e2a3"
---
> xmpMM:InstanceID="xmp.iid:48b76329-862c-df48-940a-2bbffb3f24c2"
232,233c231,232
< stEvt:instanceID="xmp.iid:57bb73ad-ed0c-7e49-b458-ba0a0669e2a3"
< stEvt:when="2017-09-27T15:10:19-04:00"
---
> stEvt:instanceID="xmp.iid:48b76329-862c-df48-940a-2bbffb3f24c2"
> stEvt:when="2017-09-27T15:05:19-04:00"

I have no explanation for why you can see it in Bridge; I don't use Bridge, but I downloaded it and tried it, and I cannot see any sign of a reject when Bridge views the file.

Maybe you did it the reverse way -- rejected in Bridge and saw it in Lightroom?
 