NEW LOOK of this site. Do you like it?


The fields are there on profiles to enter name and location. If they are not visible, it's the user's choice.


User online status indicators are planned, along with advanced search. We have a small team of programmers, so unfortunately things will take us considerable time, but we are in continuous update mode and things are being worked on monthly.

Let us know what you want and need and we will continue to work on updates. Phase 1 is migration and bug fixes, which we have about 60 to 90 hours left of before moving to the next stage. Your request may not be next in line, but requests do get listed when spots open up for updates.


Also, searching on Google now gets rather confusing. Try searching for ‘CGTalk Real time refresh/upgrade rollout background’. You get this:

But clicking the link takes you here:

Also, I have trouble logging in on several computers and different browsers, so I just visit CGTalk less now.


So do I.
Searching is now practically useless. And it was, by far, the most important thing for programmers.


The visual “improvements” are terrible and the functionality is even worse.


Does anyone know how to edit a post made a few months ago?


Can you restrict search to a particular forum? Otherwise the site is next to useless.


For the most part a good effort, but I really hate the way the reply textbox functions. It’s OK on mobile, but on desktop it’s squeezed all the way to the bottom of the screen - just awfully awkward.


Having a harder time seeing the difference between ArtStation and CGSociety these days.


You could go here and give them a piece of your mind, I guess:

“CG+ was built with a huge focus on delighting its users. The sleek dark design brings a touch of class to the application making CG+ a pleasure to use. Always. And of course, we haven’t skipped anything when it comes to great usability.”

Search doesn’t work, stats are out of whack, but hey, it’s the ‘look’ that matters…


Admins… What the hell are you doing with this forum???



Advanced search should be up and working sometime in January; it’s being worked on now.
500 errors are being fixed as well. Thanks for listing what you are experiencing.

We’ll see.


Since January 1st, I haven’t received notifications or emails from posts I’m watching.


It would be good to be able to download this whole forum, even in plain text.


In case anyone would like to try, here’s the CSV file with post count, views, thread link and date.
2003 - 2019 years (957.0 KB)

You can try this JavaScript snippet in your browser’s developer tools to parse any CGS forum section.
It first scrolls down to the maximum and then scrolls back up a little to force the new-threads loading event. You may need to tweak the numbers for it to work on your machine.

var times = 0;
var scroller = setInterval(function() {
	if ( times++ < 1000 ) {
		// scroll to the bottom, then back up a bit to trigger thread loading
		// (window.scrollMaxY is Firefox-only; use document.documentElement.scrollHeight elsewhere)
		window.scroll(0, window.scrollMaxY );
		window.scroll(0, scrollY - 150 );
	} else { clearInterval(scroller); console.log("finished!"); }
}, 300);
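
And if anyone wants to load the attached CSV programmatically, here is a minimal JavaScript sketch. The column order (posts, views, link, date) is assumed from the description above, and the sample rows are made up:

```javascript
// Minimal sketch of reading the thread-list CSV, assuming the columns are
// "posts,views,link,date" in that order (the actual file may differ).
function parseThreadCsv(text) {
  return text
    .trim()
    .split("\n")
    .slice(1) // skip the header row
    .map(function (line) {
      var cols = line.split(",");
      return {
        posts: parseInt(cols[0], 10),
        views: parseInt(cols[1], 10),
        link: cols[2],
        date: cols[3]
      };
    });
}

// Example with made-up rows:
var sample = "posts,views,link,date\n" +
  "12,340,https://forums.cgsociety.org/t/example/12345,2019-01-02\n" +
  "3,87,https://forums.cgsociety.org/t/another/67890,2018-11-20";
var threads = parseThreadCsv(sample);
console.log(threads.length);   // 2
console.log(threads[0].posts); // 12
```

Note this simple split-on-comma approach breaks if any field ever contains a comma; for the real file a proper CSV parser would be safer.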


Thank you @Serejah for the threads list.

How would you then download each thread to a separate file, having them share a common folder for the assets?

Even if we could crawl the whole forum, I don’t know the CGSociety policies about this.

I was thinking more of the idea of having a sort of indexed “digested mailing list” that CGSociety would provide us. A simple .TXT file per thread, organized in folders, preferably with the “code” tags to make the parser’s work easier :slight_smile:


This thread will soon become the longest one in the whole mxs sdk section :slight_smile:

page_crawl_posts_step = 15

-- foreach threadData in CSV do
--   sleep for a reasonable amount of time so as not to disturb the cgs servers with lots of requests

url       = @""
thread_id = (tmp = filterString url "/"; tmp[tmp.count])
savepath  = @"C:\somefolder" + "/" + thread_id
postcount = 10
if not doesFileExist savepath do makeDir savepath
if postcount <= 20 then
	-- a single request covers the whole thread
	dragAndDrop.DownloadUrlToDisk url (savepath + "/" + thread_id + ".html") 0
else
	-- the server returns at most ~20 posts per request, so walk the thread in steps
	for i = 1 to postcount by page_crawl_posts_step do
		dragAndDrop.DownloadUrlToDisk (url + "/" + (i as string)) (savepath + "/" + thread_id + "-" + (i as string) + ".html") 0

But it is just raw data, not viewable in a browser. Also, it seems you can’t get more than 20 posts per request.
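
The same pagination idea sketched in JavaScript: given the apparent 20-posts-per-request limit, build one URL per step of the thread. The `/<postNumber>` suffix and the step size mirror the MaxScript loop above and are assumptions, not a documented API:

```javascript
// Sketch: walk a thread in fixed steps and build one URL per page,
// since each request seems to return at most ~20 posts. The URL scheme
// (thread url + "/" + post number) is assumed from the snippet above.
function buildPageUrls(threadUrl, postCount, step) {
  var urls = [];
  for (var i = 1; i <= postCount; i += step) {
    urls.push(threadUrl + "/" + i);
  }
  return urls;
}

var pages = buildPageUrls("https://forums.cgsociety.org/t/example/12345", 50, 15);
console.log(pages);
// one URL each for posts 1, 16, 31, 46
```

From there each URL can be fetched and saved to disk, with a pause between requests to be kind to the servers.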

I’d also prefer to have the entire thread in a single file, but this is much more complicated, since it would require either combining several saved files into one programmatically or using some headless browser to scroll each thread from top to bottom before saving it to disk.
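
The “combine several saved files into one” route could look roughly like this in JavaScript: extract each saved page’s `<body>` content and concatenate everything into one HTML shell. Regex-based HTML handling is fragile, so treat this as a sketch rather than a robust solution:

```javascript
// Rough sketch of stitching several saved page files into one viewable
// document: pull out each file's <body> content and join the pieces
// inside a single HTML shell. A real HTML parser would be more robust.
function mergePages(htmlPages) {
  var bodies = htmlPages.map(function (html) {
    var m = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
    return m ? m[1] : html; // fall back to the raw text if no <body> tag
  });
  return "<html><body>" + bodies.join("\n<hr>\n") + "</body></html>";
}

var merged = mergePages([
  "<html><body><p>page one</p></body></html>",
  "<html><body><p>page two</p></body></html>"
]);
console.log(merged);
```

Relative asset links would still need rewriting to point at the shared assets folder, which this sketch does not attempt.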

Saving the content we wrote for personal use shouldn’t be forbidden, I guess. Why else would search engine web crawlers be allowed to do it?


It should be easier to grab the source HTML like this:

wc = dotnetobject "System.Net.WebClient"
webData = wc.DownloadString "" -- thread url goes here

After that we need to convert it to XML for easier parsing.
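
As a stopgap before a proper HTML-to-XML conversion, a quick-and-dirty regex can already pull simple things like links out of the downloaded markup. This is brittle by design; a real HTML parser remains the better tool:

```javascript
// Quick-and-dirty sketch: pull href values out of raw HTML with a regex
// instead of a full HTML-to-XML conversion. Brittle by design; good
// enough for simple pages, not a substitute for a real parser.
function extractLinks(html) {
  var re = /<a[^>]+href="([^"]+)"/gi;
  var links = [];
  var m;
  while ((m = re.exec(html)) !== null) {
    links.push(m[1]);
  }
  return links;
}

var links = extractLinks(
  '<div><a href="/t/foo/1">foo</a> <a class="x" href="/t/bar/2">bar</a></div>'
);
console.log(links); // ["/t/foo/1", "/t/bar/2"]
```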


I used HtmlAgilityPack for parsing the offline mxs reference HTML. It is pretty performant and easy to use.



It’s what I’m looking at right now :slight_smile: