
Commit fe59dee

Update README.md
1 parent 7044581 commit fe59dee

1 file changed, +1 -1 lines changed


README.md: +1 -1
@@ -126,7 +126,7 @@ The spider functionality is what gives Crawlector the capability to find additio
 - A URL page is retrieved by sending a GET request to the server, reading the server response body, and passing it to Yara engine for detection.
 - Some of the GET request attributes are defined in the [default] section in the configuration file, including, the User-Agent and Referer headers, and connection timeout, among other options.
 - Although Crawlector logs a session's data to a CSV file, converting it to an SQL file is recommended for better performance, manipulation and retrieval of the data. This becomes evident when you’re crawling thousands of domains.
-- Repeated domains/urls in the cl_sites are allowed.
+- Repeated domains/urls in the `cl_sites` are allowed.
 
 # Limitations
 
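To make the fetch-and-scan flow described in the diff context above concrete, here is a minimal sketch, assuming Python with the `requests` and `yara-python` packages: retrieve a page with a GET request, read the response body, and pass it to the Yara engine. The rule file name, header values, and timeout are illustrative assumptions, not values taken from Crawlector's code or configuration.

```python
# Illustrative sketch only: GET a URL and scan the response body with Yara,
# mirroring the behaviour described in the README excerpt above.
# "rules.yar", the header values, and the timeout are assumed placeholders.
import requests
import yara  # provided by the yara-python package

rules = yara.compile(filepath="rules.yar")  # hypothetical rule file

headers = {
    # In Crawlector these would come from the [default] section of the
    # configuration file; the values below are stand-ins.
    "User-Agent": "Mozilla/5.0",
    "Referer": "https://www.example.com/",
}

def fetch_and_scan(url: str, timeout: int = 30):
    """GET the page, read the body, and hand it to the Yara engine."""
    resp = requests.get(url, headers=headers, timeout=timeout)
    return rules.match(data=resp.content)

print(fetch_and_scan("https://example.com"))
```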
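The CSV-to-SQL recommendation in the same hunk can be illustrated with the Python standard library alone; the file names and the single-table schema below are assumptions for illustration, not Crawlector's actual session log layout.

```python
# Minimal sketch: load a session CSV log into an SQLite database so it can be
# queried instead of re-parsed. "session.csv", "session.db", and the table
# name are assumed placeholders.
import csv
import sqlite3

with open("session.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    header = next(reader)        # first row is taken as the column names
    rows = list(reader)

cols = ", ".join(f'"{c}"' for c in header)
placeholders = ", ".join("?" for _ in header)

con = sqlite3.connect("session.db")
con.execute(f"CREATE TABLE IF NOT EXISTS session ({cols})")
con.executemany(f"INSERT INTO session ({cols}) VALUES ({placeholders})", rows)
con.commit()
con.close()
```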