Modern search engines apparently limit the results of broad queries, but if the query contains a more specific term, the results tend to include websites keyed to that term, and those results are usually more relevant.
Here are some of the techniques I use to search the web.
- Search a particular top-level domain with queries such as “site:.edu”, “site:.gov”, or “site:.energy”. These may return results more relevant to oil and gas, especially when paired with additional search terms.
(In case you’re not aware, many new top-level domains have been added since the web’s early days. The .energy domain may be relevant to you, and there may be other top-level domains of interest.)
- Use a search engine to search a particular website, e.g. “site:www.energy.gov”. Again, you can supplement the query with additional search terms. (Note that this query targets the US Department of Energy’s website and is unrelated to the .energy top-level domain above.)
- Some websites offer an integrated search feature. Those results tend to be more complete, since the site only has to index its own content rather than the whole web. The quality of results varies from site to site, and be aware that some websites simply hand the query off to a general search engine.
- Consider trying a different search engine, ideally one that runs its own spider and maintains its own index; otherwise the results will likely look much like those of the umpteen engines that share the same index. Whichever engine you use, keep in mind that the results may not be good, and it’s up to you to judge them and drop the engine if they aren’t.
- You can sometimes bypass a search engine entirely. Here are a few ways to find websites without one:
  a) One way I know of is to search the source code of open-source software to see which websites it queries.
  b) If you have relevant software but not its source code, you can run GNU strings (part of binutils) on the binary to extract its hard-coded strings and check whether any of them are website URLs.
  c) You may also find websites listed in printed material, including magazines and journals.
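The idea of grepping source code for the websites it queries can be sketched in a few lines of Python. The URL pattern and the notion of scanning every file under a directory are assumptions here; tighten both for the project you’re actually inspecting.

```python
import re
from pathlib import Path

# Rough pattern for URL-like strings; adjust it to your needs.
URL_RE = re.compile(r"https?://[^\"'\s)>]+")

def urls_in_tree(root: str) -> set[str]:
    """Scan every file under `root` and collect URL-like strings."""
    found: set[str] = set()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        found.update(URL_RE.findall(text))
    return found
```

Pointed at a checked-out repository, this tends to turn up the API endpoints and data sources the program talks to.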
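The strings trick is also easy to approximate if binutils isn’t handy: GNU strings just scans a file for runs of printable characters. This sketch uses a minimum run length of 4, which matches the strings default, and then keeps only the URL-shaped results.

```python
import re

def printable_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Approximate GNU strings: runs of printable ASCII at least min_len long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

def embedded_urls(data: bytes) -> list[str]:
    """Keep only the URL-shaped strings, as a lead on what sites a program contacts."""
    return [s for s in printable_strings(data) if "http://" in s or "https://" in s]
```

Feed it the raw bytes of the program file; any URL the developers hard-coded will usually show up.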
There will be figurative legwork involved. I hope this information helps.
EDIT: Some websites have documented APIs, so look for those in your searches too.
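If you do find a documented API, queries are usually just parameterized URLs. A minimal sketch, with a made-up endpoint and parameters standing in for whatever the API’s documentation actually specifies:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameters -- substitute the ones the API documents.
BASE = "https://api.example.com/v1/wells"

def build_query_url(base: str, **params: str) -> str:
    """Assemble a GET request URL for a documented web API."""
    return f"{base}?{urlencode(params)}" if params else base

print(build_query_url(BASE, state="TX", commodity="gas"))
# https://api.example.com/v1/wells?state=TX&commodity=gas
```

You’d then fetch that URL with urllib.request.urlopen or any other HTTP client.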