Hi everyone,
I am writing an application for which there are several large internal
collections whose contents have to be filtered and sorted in various ways.
To really take the cake, the contents of the collections may change during
the process (objects coming from other threads).
The number of objects in each collection is fairly high (up to several
thousand) but not so high that I don't anticipate being able to have the
whole model in memory at one time. But it is large enough that I'm going to
have to use some kind of indexed access to avoid having to re-sort the
collections all the time.
So a database seems to be warranted here, but there isn't really a
requirement to persist the collections to disk (at least not in this part of
the app) and I don't want to incur the overhead of reading and writing
records to disk. So my questions...
Is there a way to tell SQL Server to create an "in memory" database? I've
looked at "temporary" files, but these look like regular SQL Server database
files that just are automatically dropped at the end of the session. I want
tables/indexes/views that are only contained in memory and get dropped at the
end of the session.
Do I really need to worry about this? It's my impression that SQL Server
only writes records to disk when it really needs to anyway, so is my
perception that disk read/writes would slow down the app unfounded?
Thanks for your help.
BBM|||You can't create an "in memory" database...
but you can create an "in memory" tempdb,
or you can use "in memory" tables - table variables,
or... optimize your query and use a regular db.
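For what it's worth, the table-variable option looks something like this. This is just a sketch (the table and column names are made up), and note that table variables are actually allocated in tempdb; they are just usually served entirely from cache:

```sql
-- Table variables live in tempdb but are typically cache-resident,
-- and they disappear at the end of the batch or stored procedure.
DECLARE @Items TABLE
(
    ItemId  int         PRIMARY KEY,  -- gives you indexed access for free
    Name    varchar(50) NOT NULL,
    SortKey int         NOT NULL
);

INSERT INTO @Items (ItemId, Name, SortKey) VALUES (1, 'Widget', 10);
INSERT INTO @Items (ItemId, Name, SortKey) VALUES (2, 'Gadget', 5);

-- Seeks on ItemId use the primary key; other orderings still need a sort.
SELECT ItemId, Name FROM @Items ORDER BY SortKey;
```

The catch for your scenario is that a table variable is scoped to one batch or procedure, so it won't help if several connections need to see the same data.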
"BBM" wrote:
|||SQL Server doesn't support memory-only databases but that isn't really
a problem in terms of performance as SQL Server makes extensive use of
caching. If your data is small enough and created only during a single
session then most reads will probably be from cache anyway.
For optimum performance it doesn't make much sense to create and drop
databases and tables at runtime. Create the empty tables you need at
installation and then populate them at runtime. If required you can
always delete the data afterwards but there's no real reason to drop
the tables since you will presumably only have to create them again
later.
As you only need a small-footprint DB, you could consider using MSDE
for this.
http://www.microsoft.com/sql/msde/
David Portas
SQL Server MVP
--|||If these records are to be updated often, then your biggest concern will not
be the time it takes to query but rather record locking and other concurrency
issues. Just start off with an ordinary table and implement minimal indexing,
because several thousand records is actually not a lot (it depends on the
total length of the record), and updating indexes could cause more concurrency
issues. Also, read up on the options for transaction isolation level in BOL.
Using SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED will result in the
least record locking.
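For reference, that looks like the following (the table name here is made up; check BOL for the details and the dirty-read caveats):

```sql
-- Allow dirty reads on this connection: no shared locks are taken,
-- so readers don't block writers, but they may see uncommitted data.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Or apply the same behavior per query with a table hint:
SELECT ItemId, Name FROM dbo.Items WITH (NOLOCK);
```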
What will be the maximum number of records in this table assuming growth
over the next year? When a process queries the table, is it important that
they pull the absolute most recent updates from other processes? Perhaps one
process will not have a need to query across another process's updates?
"BBM" <bbm@.bbmcompany.com> wrote in message
news:FC96FA9F-0ED6-4D26-9B66-0B02A576EB2A@.microsoft.com...
|||Thanks to all the responders. You all had good input. Right now I'm going
to proceed just using regular SQL Server Tables/Indexes until I prove to
myself that performance is an issue. I was hoping that there was some way to
tell SQL Server to keep a table in memory, but I guess there's not.
Is there a way to tell SQL Server to keep its cache at a certain size? I'm
familiar with DB2 and in DB2 you can do that by table. Essentially you can
set the cache size for a table so large that the entire table becomes memory
resident.
I am intrigued by some of Aleksandar's responses. Could you elaborate on
what you had in mind with "in memory" temporary tables?
Thanks again for your responses.
BBM
"JT" wrote:
|||Please view my response to JT below... Thanks.
"Aleksandar Grbic" wrote:
|||Please see my reply to JT below...
Thanks.
"David Portas" wrote:
|||You can set minimum and maximum values for the RAM used by SQL Server
(sp_configure or change it in Enterprise Manager). Data will still be
written to disk however - you cannot avoid this. Even creating
temporary tables will cause data to be written to the tempdb log file.
The point is that with adequate RAM you shouldn't have to spend much
time waiting for disk reads and writes.
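To set those limits with sp_configure, the pattern is roughly this (values are in MB; the numbers below are only placeholders, size them for your own server):

```sql
-- 'min/max server memory' are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'min server memory', 256;   -- MB SQL Server will hold on to
EXEC sp_configure 'max server memory', 1024;  -- MB ceiling for the buffer pool
RECONFIGURE;
```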
As JT indicated, however, there are issues that you should consider
much more important than disk usage. Good database design and
well-written code are far more important factors in determining overall
performance. A bad design or poorly written code can kill even a small
database.
David Portas
SQL Server MVP
--|||"BBM" <bbm@.bbmcompany.com> wrote in message
news:8C8DC765-3E23-403B-A206-E4ED44CB5117@.microsoft.com...
There are several methods that you can use.
1. Create the tempdb in memory and then use it. (Not necessarily a
preferred solution.)
2. Use table level variables in your stored procedures. (Not necessarily a
preferred solution.)
3. Use DBCC PINTABLE and UNPINTABLE for tables to live in memory once read
from disk. (Not necessarily a preferred solution.)
4. Preferred solution -- If the entire database has a small footprint,
then just use regular SQL Server tables and indexes to create and work with
everything. Once data and index pages are read into memory, unless SQL
Server needs more RAM for something, they will not be flushed back to disk.
Ensure that SQL Server has enough memory to keep everything in memory.
Testing will tell, but I have a feeling that all of the extra work for
PINTABLE, and/or table level variables etc. will probably NOT outperform
letting SQL Server manage itself.
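In case anyone wants to experiment with option 3 anyway, the BOL pattern is roughly the following (the database and table names are made up, and keep in mind PINTABLE is discouraged and was later removed from the product):

```sql
-- Pin a table's pages so they stay in the buffer cache once read.
DECLARE @db_id int, @tbl_id int;
SET @db_id  = DB_ID('MyAppDb');
SET @tbl_id = OBJECT_ID('MyAppDb.dbo.Items');
DBCC PINTABLE (@db_id, @tbl_id);

-- And to release the pages back to normal cache management later:
DBCC UNPINTABLE (@db_id, @tbl_id);
```

The usual warning applies: pinning a large table can starve the rest of the cache, which is one reason option 4 is preferred.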
Rick Sawtell
MCT, MCSD, MCDBA|||David and Rick,
Thanks again for the clarifications.
I have already taken JT's concerns into account. My original description of
my problem was inaccurate and you guys are justified in worrying about
concurrency. In actuality, records would be added to the database only by
my central process. Other processes notify my process that they have an item
that needs to be inserted, but they do it by raising an event that is
handled by my running process, and it does the update. As it performs the
update, my main process decides whether the insert affects what it has been
doing and possibly starts over.
Lots of interesting stuff in your responses, but I think I'll take your
advice and make sure I have a problem before I start jumping through hoops.
Thanks again.
BBM
"Rick Sawtell" wrote:
> "BBM" <bbm@.bbmcompany.com> wrote in message
> news:8C8DC765-3E23-403B-A206-E4ED44CB5117@.microsoft.com...
> going
> to
> I'm
> can
> memory
> There are several methods that you can use.
> 1. Create the tempdb in memory and then use it. (Not necessarily a
> preferred solution.)
> 2. Use table level variables in your stored procedures. (Not necessarily
a
> preferred solution.)
> 3. Use DBCC PINTABLE and UNPINTABLE for tables to live in memory once rea
d
> from disk. (Not necessarily a preferred solution.)
> 4. Preferred solution -- If the entire database is has a small footprint
,
> then just use regular SQL Server tables and indexes to create and work wit
h
> everything. Once data and index pages are read in to memory, unless SQL
> Server needs more RAM for something, they will not be flushed back to disk
.
> Ensure that SQL Server has enough memory to keep everything in memory.
> Testing will tell, but I have a feeling that all of the extra work for
> PINTABLE, and/or table level variables etc. will probably NOT outperform
> letting SQL Server manage itself.
>
> Rick Sawtell
> MCT, MCSD, MCDBA
>
>
>