So I inherited a site that has several different job files from Trimble Access. This site was done very crazy, LOL. They basically used a scale of 1.0000 in the data collector and traversed over several days. They did translate-and-rotate several different times, so the same point numbers have different values. They also rubber-sheeted some data to make it match some imagery, and across several different job files they updated coordinates in a new file and kept traversing. When I inherited the site, I was told I needed to have the crews use GPS and get all the control on SPC grid. Again, this is at a location and elevation where grid and ground are not close. I am comfortable going grid to ground; I would have traversed on grid instead of setting the scale to 1.0000000. If I set up the GPS and RTK data on some common points, will TBC bring the scale-1.0000 work to grid correctly, or am I out of luck and need to export the optical spreadsheet and do it longhand? I have never mixed apples and oranges. I always choose one or the other and keep my RTK and conventional field data all on the same datum, which is easy, and if I need ground coordinates I scale everything, lop off the false northing and easting, and keep both sets of coordinates just in case.
I am already pulling my hair out, as the same point has several different coordinates from being translated and rotated in the field a few times, plus the office shifting it to look good on imagery so it looks like SPC. And now I also have some OPUS-derived positions and RTK vectors. The project is about a mile long, up and over a mountain. Can one of you experts help?
You mentioned OPUS. If that position (or those positions) was input/derived correctly and gave decent results, it sounds like you may be able to salvage this.
Hopefully, when they ran RTK, they stored the observations as vectors, and not as coordinates only. If only the coordinates were stored, then reobserving is likely the safest/easiest bet.
If I set up the GPS and RTK data on some common points, will TBC bring the scale 1.000 to grid correctly
The short answer is yes. The long answer is yes, but it will take some work to ensure your TBC project stays in the one single target coordinate system you desire (i.e., SPC) without accidentally (automatically) switching during import, etc.
If you have some .job files that are GPS only, and some that are TS only, this shouldn't be too tough, but if you have .job files with GPS and TS together, things may go sideways quickly.
If the .job files contain a mix of GPS and TS observations, there is a fair chance you have lots of disjointed "segments" across your .job files within TBC, because unique PIDs have multiple points of reference, and that will cause havoc. Example:
They did translate and rotate several different times, so the same point numbers have different values. And they also rubber-sheeted some data to make it match some imagery, and across several different job files they updated coordinates in a new file and kept traversing.
Can you tell if they linked the job files together within Access, or is every single .job file a standalone entity?
Here's the crux of the issue, if I understand you correctly:
In order for the .job files to come in correctly, they need a (single) point of reference, which also contains a (good) quality ellipsoid height to perform the math for the geodesy/mapping projections.
Your GPS files (observations) sound like they were derived from WGS84 "real world" positions that can be dumped into TBC, and TBC knows what to do with them for various projections, etc. So far, so good. Again, assuming vectors were collected, and not just points with coordinates.
Your TS files likely contain no information that TBC can use to relate the observations to real world positions. If you try to dump a 1.0-scale TS .job file into a TBC project that is set up for SPC, you'll likely get an error. OR, your dataset may come in but sit at coordinate values that are nowhere near real life, and likely have other issues.
Bringing that same 1.0-scale TS .job file into a TBC project set up for 1.0 scale will likely work, but when you attempt to change the coordinate system, you'll likely get another error for the same reason as above.
This is a really high-level overview of the issue, and there is a lot of wrangling to be done behind the scenes to pull this off and get good/correct values.
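For a feel for what TBC is doing once it has that reference position with a good ellipsoid height, here's the back-of-the-napkin version. This is a sketch only: made-up numbers and a spherical-earth approximation, whereas TBC uses the actual ellipsoid and projection formulas.

```python
# Sketch of the ground -> grid reduction that becomes possible once a
# reference position with a good ellipsoid height is known.
# Spherical approximation; the values below are illustrative, not from
# any real project.

R = 6_371_000.0  # mean earth radius, meters (approximation)

def elevation_factor(h):
    """Factor reducing a ground distance to the ellipsoid (h = ellipsoid height, m)."""
    return R / (R + h)

def combined_factor(k, h):
    """Grid distance = ground distance * combined factor (k = grid scale factor)."""
    return k * elevation_factor(h)

# Example: 1000.000 m ground traverse leg at h = 500 m,
# with a projection grid scale factor k = 0.99995 at that location
ground = 1000.000
csf = combined_factor(0.99995, 500.0)
print(round(csf, 7), round(ground * csf, 3))
```

Without the ellipsoid height, the elevation factor is unknowable, which is exactly why a 1.0-scale TS file with no reference position can't be reduced.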
There are a few different approaches to this problem.
I always start with what I know is good, add a .job file/segment, "massage the segment", do my checks, rinse, repeat.
Creating a TBC project, dumping all the .job files in at the beginning, and stepping through the processing will work too, but it can be overwhelming and throw lots of conflicting errors that may not let you proceed, etc.
So, yes. It can be done. But it's probably not for the novice, or faint of heart.
Ditto, and start by cleaning up the point names too: add prefix or suffix IDs so you're not constantly staring at the same point wondering whether it's the wrong or good data. Depending on how big the datasets are, you can weigh the time it's going to take to recollect against the time to work through and determine what data is even usable.
Copy everything into a separate directory, just in case. I know you know this; I'm just typing it so I remember...
Break down the jobs by type of data, dates, even crews, and build a master project VCE where you can assemble the jigsaw, based on what Michigan Left mentioned.
I ran into a few problems that were similar in a job I was working on in Kuwait when I first found this site in 2018.
Data was collected in a local calibrated fashion, and it just hit me upside the head when I found out. Localized and scaled to ground. I wasn't going back to Kuwait and had other work to continue with, so it was a mini disaster until I was able to wrangle the details.
I took a lot of time figuring out what they had done, and like Michigan Left said, it's not that it's impossible, it's just very tedious. Unless it becomes impossible.
HMU if you want to walk through some of it.
@michigan-left Thank you. Yes, I had them do RTK and logging when I sent them up. They logged data at the base while locating some of the control via a here position. The next day, because of terrain and such, they went to the other end of the project, did RTK and logging, and were able to locate the other half and overlap a few control points from the traverse. It is not the best data, but I can at least put it in the right area. I sent the data off to OPUS about a week later and have some decent OPUS positions that fit the middle within reason of the RTK vectors. Now I am trying to bring in the traverse files one at a time and clean them up. They never closed the traverse, so it was just open. This job was done prior to me being hired. I have some wiggle room for accuracy, so that helps; it's just getting those scale-1.0000 files cleaned up, with the bogus coordinates from translate-and-rotate and then the rough imagery coordinates. Every conventional job file is TS-only, thank goodness, but every day they re-smooshed themselves to fit what needed to be done. Just a headache, really. I was mainly wanting to make sure that bringing in the lat/long/ellipsoid height from OPUS would allow TBC to take the ground traverse, reduce it to the ellipsoid, and reproject correctly so I have all apples. I guess they were planning on doing the whole job with RTK, but they had issues, so they started just traversing and making stuff work. The LS had never run the gear before and it was a young crew, so they all just made it work that first week. Now I get to try and fix it, is all. We have to go back in a few weeks, and I want known control instead of all the here positions and assumed coordinates.
Why not take the raw data (angles, distances, and vectors) and adjust them OUTSIDE of TBC? That is what I always do. I use either Geolab or Star*Net depending on project size (area), types of observations, etc.
Sounds like you're on the right path, and your GPS appears to be the most reliable data to start with. I would try to hold all of your "correct" GPS positions as gospel (to begin), and once you've got all your PIDs figured out and correct (per @jitterboogie), just go in and disable the coordinates for all the wonky traverse point setups. TBC really does make QA/QC, editing data, and fixing setups easy and intuitive.
I would try holding one of your OPUS points fixed in lat/long/ellipsoid height (global; for NAD83(2011), epoch 2010; hopefully you're not also dealing with HTDP for ultra-up-to-date positions?), run the LSA, and see how you hit the other OPUS point(s). If it's really good, hold all the OPUS points similarly fixed and LSA the whole thing constraining one, three, or more OPUS points (otherwise you may introduce a vertical slope across the project).
Remember to tell the rest of the staff and the brass to use your new SPC values for everything and to rotate, scale, rubbersheet in every other data source to said new values. This is probably going to be the hardest part of the exercise.
@michigan-left Yes, lat/long/ellipsoid always for GNSS, up until I know everything is working and not tilting for sure. I had a job where a person was fixing the elevation, and boy, did it go sideways very quickly. Lol
Up until I know everything is working and not tilting for sure.
Fixing ortho heights is (or can be) a great idea if you have leveling or shots on benchmarks, especially if there are mountains/lots of relief. Not sure what the geoid looks like in this area.
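For anyone following along, the height relationship in play is just h = H + N. A quick sketch; the geoid separation value below is invented for illustration, not pulled from any model for this area.

```python
# h = H + N : ellipsoid height = orthometric height + geoid separation.
# The N value below is a made-up example; get the real one from your
# geoid model (e.g., in TBC or from NGS tools).

def ellipsoid_height(ortho_h, geoid_sep):
    """Combine an orthometric height and geoid separation (same units)."""
    return ortho_h + geoid_sep

def ortho_height(ellip_h, geoid_sep):
    """Recover the orthometric height from an ellipsoid height."""
    return ellip_h - geoid_sep

# Example: benchmark at H = 500.00 m where the geoid model gives N = -24.35 m
print(ellipsoid_height(500.00, -24.35))  # 475.65
```

In steep terrain, the way N changes across the project is exactly why holding ortho heights on benchmarks (rather than a single fixed elevation) matters.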
Already some good advice here, so I won't rehash it.
I think you'll find that the most tedious part of this project is the merging of points with different numbers and removing bogus coordinate records that were stored in Access from the translate/rotate/rubbersheet operations.
I prefer to set the coordinate system and first bring in all of the raw data files before importing control coordinates such as OPUS solutions or published control - make sure those are imported or changed to Control class, of course.
Once that is done, if you couldn't figure out which points need to be merged on import, it will be necessary to zoom around the project and manually grab the points to merge.
But once everything is merged, use the Selection Explorer to grab all the unadjusted/bogus coordinate records from the imported JXL/JOB files, and just delete them. When you recompute afterward, it will first find the Control class coordinates from OPUS or wherever, and flow out observations from there.
This gives you a clean project with good a priori coordinates and control values to start constraining to.
Tying in the same point with a different name in the field is just poor practice and can cause some major errors in the office, and if it doesn't cause an error it's just a pain to figure out what is what. With proper data collection methods and good field procedures everything should come together as the data are imported, and there should be little to no merging.
TBC adjusts on grid or ground without a problem, so as long as you keep your coordinate system set properly and don't override it when importing data, you'll be fine.
@rover83 Yes, that was my biggest question: since all the robotic work was done in Trimble Access at a scale of 1.000000, if I brought it into a TBC project set to NAD83 SPC, but had the GPS data in first, then once I merged them, would it reduce the ground traverse to the ellipsoid and reproject at grid? It is slow and tedious for sure. Least squares I understand; TBC is just a little slow to move around in, but I am getting through it. Lol, right at a mile of data, plus they did topo, and every job, instead of linking, they imported all points every time, lol. So lots of unnecessary data I don't need. It's a cluster for sure.
In a similar vein to what @john-hamilton mentioned, you might consider pulling the control out of the TS files (deleting the topo), getting just the control data (GPS + traverse) adjusted by itself for final control values, and popping those into a separate TBC project for topo processing, with good control per .job file?
I've probably done this type of thing any number of 49,382 different ways you can think of.
As long as your control is homogeneous and fixed, and you are able to keep track of what you did, the topo will fall into line, and once any GIS, imagery, etc. is added, all is right with the world.
Ugh, they were running topo and importing previous days' work? Why? Just why?
I don't have too much of a problem with running both control and topo in the same job. Sometimes it's easier, sometimes it's necessary. If point range discipline is maintained it's easy to wipe out the topo work and save it for later.
But the topo processing will inevitably be a pain too, since they decided to call the same point different names. Once you have adjusted control coordinates in a separate file, you'll have to process the topo data in a new TBC project, which means sifting through and merging all of those same unadjusted different point names all over again - and afterward making sure the observations flow out from the control-class adjusted coordinates you import.
TBC will easily adjust control & topo in the same project, but it is a little harder to keep track of things. You'll get there eventually, but an easier solution is to make sure field procedures are up to snuff.
Run control first, don't &*%$ with point names, don't screw with field translates and rotates, maintain a cloud project with up-to-date information, do intermediate adjustments in the office if necessary and push them to the cloud. Once control work is done, adjust the control prior to running topo if at all possible, but if not, don't ever &*%$ with point names, so that during topo processing there's no confusion.
In any case, I'd make sure the powers-that-be get a detailed download of the problems that are caused by poor field procedures. They're making it way, way harder than it needs to be and causing you unnecessary hassle.
@rover83 Well, I am with you on point names. And why do people want to separate conventional and RTK when software like Trimble Access and TBC handles both for control work? If they had kept them together, they would have known something was wrong on the second observation, no matter where it came from. A mile-long traverse with a 300-foot difference in elevation, sitting at 1600-1900 ft, about as far from the central meridian as you can get on a Lambert projection, and a mostly north-south run. And they held one point's scale factor at the low end. That's why I wanted it all in grid first: there are just too many issues using one point to scale from for grid-to-ground. I was going to get an average elevation of the site first, after I get everything fixed, and compare the grid/ground ppm before accepting it.
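For what it's worth, the elevation part of that ppm check is easy to ballpark. A sketch using a spherical approximation and the 1600-1900 ft range mentioned above; this is the sea-level (elevation) factor only, ignoring the Lambert grid scale factor, which varies with latitude and stacks on top.

```python
# Ballpark the ground -> grid shrink from elevation alone, in ppm.
# Spherical approximation; the project's Lambert grid scale factor
# would be applied in addition to this.

R_M = 6_371_000.0      # mean earth radius, meters
FT_TO_M = 0.3048       # international foot; US survey foot differs by ~2 ppm

def elevation_ppm(height_ft):
    """Approximate shrink from reducing a ground distance at this height
    to the ellipsoid, expressed in parts per million."""
    h = height_ft * FT_TO_M
    return (1.0 - R_M / (R_M + h)) * 1e6

for elev in (1600, 1750, 1900):
    print(elev, "ft ->", round(elevation_ppm(elev), 1), "ppm")
```

At these elevations the effect lands in the neighborhood of 75-90 ppm, i.e., roughly 0.08 ft per 1000 ft, which is why a single held scale point across a 300 ft relief range is risky.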
@john-hamilton I have not run Star*Net in many moons, lol, but TBC handles it all nicely. It was just poor procedures; the previous work was done without me being here, and there was a lot of making it look good, which was not good in reality. I have it whipped now, but it's not the best set of data, as there were several blunders with no redundancy. We are going back, and I already have what I need them to do, so I can add more headaches for myself. I have it close from one end to the other, but I had to force it to use some vectors in the middle that I know are not the best. In some areas I have made assumptions, based on what data I have, about where a few rod-height busts might be, so maybe I will confirm those with the extra redundancy I am requesting. It probably meets the project scope now; it's just not what my geodetic side would ever accept, lol. This one mile gave me more fits than running geodetic control (GPS static, levels via DiNi over 7 miles, plus conventional traverse and triangulation) all together. Just some not-so-good data is all. And the approach should have been different.
I have encountered many surveyors over the years who name a point differently whenever they re-shoot it.

I would like to hear from someone who does that... WHY?
Also, I always try to use unique point numbers, except on dam deformation surveys, where points are already named and often duplicated between projects. My client will give me GCP01, GCP02, etc. on every mapping project; we assign unique GPS IDs to those points, using the five-digit project number plus one, two, or three letters depending on how many points there are. For example, 22042AA, 22042AB, 22042AC, etc. on project 22042. The field log software I developed, which we use on GPS jobs, keeps track of it all, and the coordinates database gets populated with both a point name (often assigned by the client) and a GPS ID (assigned by me). Processing is done with the unique ID.
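That suffix scheme is easy to automate. A quick sketch of the idea; the function name is my own invention, not the field log software mentioned above, and it assumes a fixed letter-suffix length per project.

```python
# Generate unique GPS IDs of the form <project><letters>, e.g. 22042AA,
# 22042AB, ... Suffix length (1-3 letters) is chosen per project based
# on how many points are expected.
from itertools import product
from string import ascii_uppercase

def gps_ids(project, count, n_letters=2):
    """Return `count` unique IDs for one project, in AA, AB, AC... order."""
    suffixes = (''.join(t) for t in product(ascii_uppercase, repeat=n_letters))
    return [f"{project}{s}" for s, _ in zip(suffixes, range(count))]

print(gps_ids("22042", 3))  # ['22042AA', '22042AB', '22042AC']
```

Two letters cover 676 points per project, three letters 17,576, so the suffix length rarely needs to change mid-project.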
At previous companies I worked for, we started at 1 and sequentially numbered GPS points; when I left, we were in the 10,000s. Anytime we went back to a point that we had previously surveyed, it got the old number.
I hardly ever do topo surveys with many ground shots, so that might be an exception, but those points are usually not marked. When we do topo, we uniquely name the setup points, but unmarked points are often just assigned 1001, 2001, etc.